Conversational Question Answering (ConvQA) systems trained on QuAC, a dataset of conversations about popular people in Wikipedia, have attained impressive performance. Still, widespread adoption of such systems requires cost-effective domain and language adaptation. In this talk I will review our experience deploying such systems in new domains. First, I will show that fine-tuning a pre-trained ConvQA system on a single FAQ domain yields high-quality systems in other FAQ domains. Second, I will show that a small dataset in Basque suffices to obtain comparable performance. Third, I will present strong results on COVID-related scientific literature. Finally, I will present a technique that improves performance in new domains after deployment, using only user feedback and no supervised in-domain training. All in all, our research indicates that ConvQA is ready for cost-effective deployment in new domains.