Interpretable machine learning with Python


Find out what you will learn throughout the course.


What you'll learn



👉 Explain intrinsically interpretable models.

👉 Explain black-box models with post-hoc methods.

👉 Produce global and local explanations.

👉 Calculate permutation feature importance.

👉 Create partial dependence plots, ALE plots, and ICE plots.

👉 Use LIME, Shapley values, and SHAP.

👉 Apply surrogate models and perturbation-based explainability methods.

👉 Explain models with open-source Python libraries.



What you'll get

10 hours on-demand video

Jupyter notebooks

10+ quizzes and assignments
Lifetime access





Instructor support





Certificate of completion




💬 English subtitles



Instructor


Soledad Galli, PhD

Sole is a lead data scientist, instructor, and developer of open source software. She created and maintains the Python library Feature-engine, which allows us to impute data, encode categorical variables, and transform, create, and select features. Sole is also the author of the "Python Feature Engineering Cookbook," published by Packt.

More about Sole on LinkedIn.


Pricing


Can't afford it? Get in touch.



30-day money-back guarantee


If you're disappointed for whatever reason, you'll get a full refund.

So you can buy with confidence.

Interpretable Machine Learning Course


Welcome to the most comprehensive online course on Interpretable Machine Learning.

In this course, you will learn methods and tools to interpret intrinsically explainable machine learning models like linear regression, decision trees, random forests, and gradient boosting machines. You will also discover methods to explain black-box algorithms, like deep neural networks, clustering methods, anomaly detection models, and more.

In our Interpretable Machine Learning Course, you will find detailed explanations of how the methods work, their advantages, the risks of using them, and how to implement these algorithms in Python.


What is interpretability in machine learning?

Interpretability in machine learning refers to our ability to understand and explain how machine learning algorithms make predictions. It involves unraveling the inner workings of machine learning models to gain insights into their decision-making processes, or using alternative post-hoc methods to understand the output of more complex models.

Interpretable machine learning enables us to understand the main drivers of a model's predictions, and why the model produced a particular prediction, providing transparency and accountability in AI systems.


In the context of machine learning, interpretability helps answer questions such as:

  1. How does the model use input features to make predictions?
  2. What are the most important features driving the model's decisions?
  3. Are there any biases or unintended consequences in the model's decision-making process?
  4. Can we identify and rectify any errors or inconsistencies in the model's behavior?


In the business context, interpreting our ML models allows us to understand whether:

  1. We are being fair and ethical towards our clients.
  2. Our models are as accurate as possible and therefore maximize profit.
  3. We are vulnerable to adversarial attacks, exposing our brand to reputation risk.


Why is interpretable machine learning important?

Interpretability has become an indispensable aspect of modern machine learning and artificial intelligence. Interpretable ML empowers organizations to deploy models they can understand and therefore trust.


Interpretations of machine learning models offer valuable insights that can be utilized in various ways. For example:

  1. Model Debugging and Improvement: Interpretations can help identify and rectify issues in the model's behavior.
  2. Model Transparency and Explainability: Interpretations provide transparency by explaining the underlying factors driving the model's decisions.
  3. Feature Importance Analysis: Interpretations allow us to identify the most influential features in the model's decision-making process, and hence we can deploy simpler, faster, and more interpretable models.
  4. Insights into Complex Relationships: Interpretations provide insights into the complex relationships between the variables in our data.


As the demand for explainable AI continues to rise across various industries, mastering interpretability techniques has become crucial for data scientists, researchers, and data professionals.


What will you learn in this course?

In our Interpretable Machine Learning Course, you will find the knowledge and skills you need to explain both intrinsically explainable and black-box machine learning models. Whether you are working with linear models, decision trees, neural networks, or deep learning algorithms, this course will empower you to make sense of their inner workings and outputs, and to extract meaningful insights from your big data sources.

Throughout the course, we will cover a wide range of interpretability methods, including model-specific and model-agnostic approaches.


Model-specific methods

We will cover the knowledge and tools needed to extract feature importance metrics from intrinsically explainable models. You will understand exactly how feature importance is calculated by Python libraries like scikit-learn, XGBoost, and LightGBM.
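
To give you a flavor, here is a minimal sketch of extracting impurity-based feature importance from a random forest with scikit-learn. The dataset and model settings are illustrative choices, not the course's exact examples:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

# Load a small tabular dataset (an illustrative choice).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Fit a tree ensemble; tree models expose impurity-based importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)

# feature_importances_ holds the mean decrease in impurity per feature.
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head())
```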

Next, we will move on to determining feature attribution for individual samples. Here, we will use bespoke Python libraries like treeinterpreter and ELI5.
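
As a taster, and assuming the rf model and X DataFrame from the sketch above, treeinterpreter decomposes an individual prediction into a bias term (the training set mean) plus one contribution per feature:

```python
from treeinterpreter import treeinterpreter as ti

# Explain one observation, kept two-dimensional for the predict call.
sample = X.iloc[[0]]

# For each class: prediction = bias + sum of the feature contributions.
prediction, bias, contributions = ti.predict(rf, sample.values)

print("prediction:", prediction[0])
print("bias (training set mean):", bias[0])
print("class-1 contributions:", contributions[0][:, 1])
```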


Model-agnostic methods

We will discuss visualization techniques, perturbation analysis, and the use of surrogates to explain our models globally. Among the global methods, you'll learn to calculate permutation feature importance and to construct partial dependence plots.
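
Both of these are available in scikit-learn's inspection module. A minimal sketch, again reusing the rf, X, and y objects from the snippets above:

```python
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

# Permutation importance: shuffle one feature at a time and measure
# how much the model's score drops.
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print(pd.Series(result.importances_mean, index=X.columns).nlargest(5))

# Partial dependence: the model's average prediction as one feature varies.
PartialDependenceDisplay.from_estimator(rf, X, features=["mean radius"])
plt.show()
```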

Next, we will move on to explaining our models locally using the popular methods LIME and SHAP. These methods perturb simpler data representations, like groups of words or groups of pixels, to explain complex models like deep neural networks.

After understanding how these methods work, you will learn to leverage popular Python libraries such as LIME and SHAP to interpret and explain complex machine learning models effectively.
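
As a preview, here is a minimal sketch of both libraries on tabular data, again assuming the rf model and X DataFrame from the earlier snippets; exact output layouts vary slightly across library versions:

```python
import shap
from lime.lime_tabular import LimeTabularExplainer

# LIME: fit a simple local model to perturbed copies of one observation.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="classification"
)
lime_exp = lime_explainer.explain_instance(
    X.iloc[0].values, rf.predict_proba, num_features=5
)
print(lime_exp.as_list())  # top local feature contributions

# SHAP: Shapley-value attributions, computed efficiently for tree ensembles.
shap_explainer = shap.TreeExplainer(rf)
shap_values = shap_explainer.shap_values(X.iloc[:100])
# For classifiers, attributions are indexed per class; the exact array
# layout differs between SHAP versions.
print(shap_values[0])
```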


What you’ll learn

These are the key concepts and skills you will gain after completing this course:

  • Understand the importance of interpretability in machine learning and artificial intelligence.
  • Learn techniques to explain black-box models, including neural networks and deep learning algorithms.
  • Gain insights into the interpretation of regression and classification models.
  • Carry out model optimization for interpretability.
  • Explore interpretability techniques for tabular data and image-based datasets.
  • Master the use of LIME, SHAP, and other tools for model interpretation and visualization.
  • Use counterfactual explanations, saliency maps, and surrogate models to explain black-box algorithms.
  • Apply interpretability techniques to real-world scenarios, such as healthcare, finance, and more.


By the end of this course, you will be equipped with the ability to unravel the complexities of machine learning models, understand their decision-making processes, and communicate their insights effectively.

Whether you are a data scientist, machine learning engineer, or big data professional looking to enhance your understanding of machine learning interpretability, this course is designed to empower you with the skills and tools necessary to excel in your field.


The instructor

Soledad Galli will be leading you through this course. Sole’s been recognized as LinkedIn's top voice in data science and analytics in both 2018 and 2024. Additionally, she is the accomplished author of Packt's Python Feature Engineering Cookbook and Leanpub's Feature Selection in Machine Learning book.

Having worked in finance and insurance companies, both highly regulated industries that need to be able to explain the decisions they make, Sole is in a unique position to tell you more about the importance of interpretable ML, and to show you how to make sense of your models, both practically and theoretically.


Course prerequisites

To make the most out of this course, you need to have:

  • Basic knowledge of machine learning algorithms and Python programming.
  • Familiarity with machine learning models for regression and classification, including logistic and linear regression, random forest classifiers, and gradient boosting machines.
  • Familiarity with model performance metrics like ROC-AUC, MSE, and accuracy.


Who is this course for?

This course is designed for professionals and students seeking a deeper understanding of interpretability techniques. It is suitable for data scientists, researchers, and professionals in computer science, data science, and related fields, who want to improve their skills and advance their careers.

 

To wrap up

This comprehensive machine learning interpretability course contains over 50 lectures spread across 10 hours of on-demand video content, more than 10 quizzes and assessments, and demonstrations using real-world use cases.

All topics include hands-on Python code examples in Jupyter notebooks that you can use for reference, practice, and reuse in your own projects.


The course comes with a 30-day money-back guarantee, so you can sign up today with no risk.


So what are you waiting for? Join us as we unlock the power of interpretable machine learning together and start making sense of your machine learning models.

Course Curriculum

  Welcome
  Course material
  Interpretability in Machine Learning
  Linear regression
  Logistic regression
  Decision trees
  Random Forests
  Gradient boosting machines
  Permutation feature importance
  Partial dependence plots
  Accumulated local effects plots
  Individual Conditional Expectation
  Surrogate models
  LIME
  Shapley Values and SHAP
  Congratulations! You did it!

Frequently Asked Questions


Are "interpretable machine learning" and "explainable AI" the same thing?


Yes, they are. Both "interpretable machine learning" and "explainable AI" refer to the process of making sense of machine learning model outputs.

Hence, whether you found our course through our interpretable ML landing page or our explainable AI landing page, you will enroll in exactly the same course.


When does the course begin and end?


You can start taking the course from the moment you enroll. The course is self-paced, so you can watch the tutorials and apply what you learn whenever you find it most convenient.



For how long can I access the course?


You get lifetime access to the course. This means that once you enroll, you will have unlimited access to the course for as long as you like.



What if I don't like the course?


There is a 30-day money-back guarantee. If you don't find the course useful, contact us within the first 30 days of purchase and you will get a full refund.



Will I get a certificate?


Yes, you'll get a certificate of completion after completing all lectures, quizzes and assignments.
