If we want to build machines that think and learn like humans do, and that can learn and think with people, our best bet is to build machines that learn to write programs expressing their thoughts in human-understandable code. These machines should also learn from the kinds of data that humans naturally consume and produce: one or a few examples of program execution, and natural language descriptions of program goals or high-level structure. We are far from achieving this goal, but the last few years have seen intriguing first steps and have opened up a new set of hard problems for future work. I will talk about some lessons learned: how we might best combine neural and symbolic approaches under the broad rubric of probabilistic inference in hierarchical generative models for code, and the synergies to be gained from drawing on both execution examples and natural language as sources of data. I will also discuss promising near-term challenge domains that capture foundational human capacities for learning concepts, systems of concepts (or domain theories), and causal models, and where the next generation of program-learning approaches could make important progress.