We study theoretical and applied techniques for building interpretable machine learning models. We also study how to extract explanations from black-box pre-trained models (e.g., deep networks) via concepts, counterfactual explanations, and interventions.
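As a minimal illustration of one of these ideas, a counterfactual explanation answers "how would the input have to change for the model's prediction to flip?" The sketch below uses a hypothetical fixed linear classifier (the weights are illustrative, not from any real trained model) and finds a small perturbation along the decision-boundary normal that changes the predicted class:

```python
import numpy as np

# Hypothetical toy classifier: a fixed linear model (weights chosen
# purely for illustration, not learned from data).
w = np.array([1.5, -2.0])
b = 0.5

def predict(x):
    """Return class 1 if the linear score is positive, else class 0."""
    return int(x @ w + b > 0)

def counterfactual(x, target, step=0.01, max_iter=10000):
    """Minimally perturb x along the normal to the decision boundary
    until the prediction becomes `target` (a simple line search)."""
    direction = w / np.linalg.norm(w)   # unit normal to the boundary
    if target == 0:
        direction = -direction          # move toward the negative side
    x_cf = x.copy()
    for _ in range(max_iter):
        if predict(x_cf) == target:
            return x_cf
        x_cf = x_cf + step * direction
    raise RuntimeError("no counterfactual found within max_iter steps")

x = np.array([0.0, 1.0])            # this input is predicted as class 0
x_cf = counterfactual(x, target=1)  # nearby input predicted as class 1
print(predict(x), predict(x_cf), np.round(x_cf - x, 2))
```

The perturbation `x_cf - x` is the explanation: it shows the smallest change (under this search) that alters the model's decision. Real counterfactual methods add constraints such as sparsity or plausibility, but the flip-the-prediction core is the same.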