Poster
Staying up to Date with Online Content Changes Using Reinforcement Learning for Scheduling
Andrey Kolobov · Yuval Peres · Cheng Lu · Eric Horvitz
Tue Dec 10 05:30 PM -- 07:30 PM (PST) @ East Exhibition Hall B + C #213
From traditional Web search engines to virtual assistants and Web accelerators, services that rely on online information need to continually keep track of remote content changes by explicitly requesting content updates from remote sources (e.g., web pages). We propose a novel optimization objective for this setting that has several practically desirable properties, and efficient algorithms for it with optimality guarantees even in the face of mixed content change observability and initially unknown change model parameters. Experiments on 18.5M URLs crawled daily for 14 weeks show significant advantages of this approach over prior art.
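The abstract does not spell out the algorithms, but the underlying scheduling problem can be illustrated with a toy sketch. Assuming (purely for illustration, not as the paper's method) that each source changes according to a Poisson process whose rate is estimated from fully observed change counts, a fixed daily crawl budget can be split across sources in proportion to those estimated rates:

```python
# Illustrative sketch only: a naive rate-proportional crawl scheduler
# for the "keep remote content fresh under a crawl budget" setting.
# The paper's actual objective and algorithms are more sophisticated
# (mixed observability, learned change-model parameters, guarantees).

def estimate_rate(num_changes: int, window_days: float) -> float:
    """MLE of a Poisson change rate: observed changes per day."""
    return num_changes / window_days

def allocate_crawls(rates: dict, total_crawls_per_day: float) -> dict:
    """Split a daily crawl budget proportionally to estimated rates."""
    total = sum(rates.values())
    return {url: total_crawls_per_day * r / total
            for url, r in rates.items()}

# Hypothetical sources, each observed for one week:
rates = {
    "a.example": estimate_rate(14, 7.0),  # ~2.0 changes/day
    "b.example": estimate_rate(7, 7.0),   # ~1.0 change/day
    "c.example": estimate_rate(3, 7.0),   # ~0.43 changes/day
}
plan = allocate_crawls(rates, total_crawls_per_day=70)
```

With these numbers the fastest-changing source receives the largest share of the 70 daily crawls. Rate-proportional allocation is a classic baseline in the crawl-scheduling literature; part of the paper's point is that a carefully chosen objective and policy can do significantly better than such heuristics.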
Author Information
Andrey Kolobov (Microsoft Research)
Yuval Peres (N/A)
Cheng Lu (Microsoft)
Eric Horvitz (Microsoft Research)
More from the Same Authors
- 2021: Bursting Scientific Filter Bubbles: Boosting Innovation via Novel Author Discovery
  Jason Portenoy · Jevin West · Eric Horvitz · Daniel Weld · Tom Hope
- 2021: A Search Engine for Discovery of Scientific Challenges and Directions
  Dan Lahav · Jon Saad-Falcon · Duen Horng Chau · Diyi Yang · Eric Horvitz · Daniel Weld · Tom Hope
- 2023 Poster: Survival Instinct in Offline Reinforcement Learning
  Anqi Li · Dipendra Misra · Andrey Kolobov · Ching-An Cheng
- 2022 Poster: MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control
  Nolan Wagener · Andrey Kolobov · Felipe Vieira Frujeri · Ricky Loynd · Ching-An Cheng · Matthew Hausknecht
- 2021 Poster: Heuristic-Guided Reinforcement Learning
  Ching-An Cheng · Andrey Kolobov · Adith Swaminathan
- 2020: Closing Remarks: Eric Horvitz (Microsoft)
  Eric Horvitz
- 2020 Workshop: Cooperative AI
  Thore Graepel · Dario Amodei · Vincent Conitzer · Allan Dafoe · Gillian Hadfield · Eric Horvitz · Sarit Kraus · Kate Larson · Yoram Bachrach
- 2020 Poster: Policy Improvement via Imitation of Multiple Oracles
  Ching-An Cheng · Andrey Kolobov · Alekh Agarwal
- 2020 Spotlight: Policy Improvement via Imitation of Multiple Oracles
  Ching-An Cheng · Andrey Kolobov · Alekh Agarwal
- 2020 Poster: Safe Reinforcement Learning via Curriculum Induction
  Matteo Turchetta · Andrey Kolobov · Shital Shah · Andreas Krause · Alekh Agarwal
- 2020 Spotlight: Safe Reinforcement Learning via Curriculum Induction
  Matteo Turchetta · Andrey Kolobov · Shital Shah · Andreas Krause · Alekh Agarwal
- 2019 Poster: Efficient Forward Architecture Search
  Hanzhang Hu · John Langford · Rich Caruana · Saurajit Mukherjee · Eric Horvitz · Debadeepta Dey
- 2019 Poster: Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting
  Aditya Grover · Jiaming Song · Ashish Kapoor · Kenneth Tran · Alekh Agarwal · Eric Horvitz · Stefano Ermon
- 2017 Poster: Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach
  Emmanouil Platanios · Hoifung Poon · Tom M Mitchell · Eric Horvitz
- 2012 Poster: Patient Risk Stratification for Hospital-Associated C. Diff as a Time-Series Classification Task
  Jenna Wiens · John Guttag · Eric Horvitz
- 2012 Spotlight: Patient Risk Stratification for Hospital-Associated C. Diff as a Time-Series Classification Task
  Jenna Wiens · John Guttag · Eric Horvitz
- 2009 Poster: Breaking Boundaries Between Induction Time and Diagnosis Time Active Information Acquisition
  Ashish Kapoor · Eric Horvitz