Human-like trajectory generation and footstep planning have long been open problems in humanoid robotics. Meanwhile, research in computer graphics has steadily developed machine-learning methods for character animation, training human-like models directly on motion capture data. Such methods have proven effective in virtual environments, where the focus is mainly on trajectory visualization. This paper presents ADHERENT, a system architecture that integrates the machine-learning methods of computer graphics with the whole-body control methods of robotics to generate and stabilize human-like trajectories for humanoid robots. Leveraging human motion capture locomotion data, ADHERENT yields a general footstep planner that includes forward, sideways, and backward walking trajectories blending smoothly into one another. At the joint-configuration level, ADHERENT computes data-driven whole-body postural references coherent with the generated footsteps, thus increasing the human-likeness of the resulting robot motion. Extensive validation of the proposed architecture is presented through both simulations and real experiments on the iCub humanoid robot. Supplementary video: https://sites.google.com/view/adherent-trajectory-learning.