R packages for eXplainable Artificial Intelligence


Joint work with Szymon Maksymiuk and Alicja Gosiewska.

The growing demand for fast and automated development of predictive models has contributed to the popularity of machine learning frameworks. ML frameworks allow us to quickly build models that maximize a selected performance measure. However, the result is often a black-box model, and too often it is difficult to detect problems early enough. Insufficiently tested models quickly lose their effectiveness, lead to unfair and discriminatory decisions, are rejected by users, and offer no possibility of appeal.

To build models responsibly, we need tools for the exploration, debugging, and explanation of model predictions. There is a growing number of methods that can be used for this purpose. The map below divides them into three groups: (1) tools for building models that are interpretable by design (although this is not always easy), (2) tools for exploring models of a specific structure, and (3) universal tools for exploring models in a structure-agnostic fashion.

[Figure: The map of XAI-related tools and methods.]

We have prepared an overview of the most popular R packages that can be used to build interpretable models or to explore complex ones. Example knitr notebooks for more than 30 packages are available at http://xai-tools.drwhy.ai/.
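
To give a flavour of structure-agnostic exploration, here is a minimal sketch using the DALEX package, one of the packages covered in the notebooks. The random forest model and the apartments data shipped with DALEX are illustrative choices, not the only option; any fitted model wrapped in an explainer can be explored the same way.

```r
# A minimal sketch, assuming the DALEX and randomForest packages are installed;
# the apartments / apartments_test datasets ship with DALEX.
library("DALEX")
library("randomForest")

# Fit an arbitrary black-box model.
model_rf <- randomForest(m2.price ~ ., data = apartments)

# Wrap it in a structure-agnostic explainer.
explainer_rf <- explain(model_rf,
                        data  = apartments_test[, -1],
                        y     = apartments_test$m2.price,
                        label = "random forest")

# Global explanation: permutation-based variable importance.
plot(model_parts(explainer_rf))

# Local explanation: break-down of a single prediction.
plot(predict_parts(explainer_rf, new_observation = apartments_test[1, ]))
```

The same explain / model_parts / predict_parts workflow applies regardless of which framework the underlying model comes from, which is what makes this group of tools model agnostic.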

We hope that collecting these packages in one place will increase their visibility and thus lead to better, more transparent, and reliable models. The examples show that the individual packages are easy to use and offer many different features.
If you know of another package that should be included in the list, or points that should be taken into account, please let us know (add an issue here). Contributions are more than welcome!

We would like to thank Patrick Hall for helping us fill in the XAI map, and Hubert Baniecki and Anna Kozak for their valuable tips and comments.

If you are interested in other posts about explainable, fair, and responsible ML, follow #ResponsibleML on Medium.
