Abstract

Support Vector Machines (SVMs) have attracted a great deal of attention and achieved considerable
success, mainly as powerful classifiers. However, one of the main drawbacks of SVMs is the
lack of intelligibility of their results. SVMs are “black box” systems that provide neither insight
into the reasons for a classification nor explanations: the results produced must be taken on faith.
We are concerned with the problem of intelligibility because, in our practical experience,
domain experts strongly prefer Machine Learning results that come with explanations. In that context,
we have developed a new approach that provides explanations and makes SVM results more actionable.
The underlying idea is to produce explanations by applying symbolic Machine Learning models
to SVM-produced ranking results. More precisely, we contrast SVM results from
the top and bottom of the rankings to detect the main characteristic properties of the classes, which
practitioners can use to direct actions and to understand the system. We applied our
approach to several datasets. Our empirical results are promising and demonstrate the utility of our
methodology with regard to the intelligibility and actionability of SVM output.

Key words: Support Vector Machines (SVMs); Ranking; Rule Extraction; Actionability.
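
As a rough illustration of the idea sketched above (not the authors' exact pipeline), the following minimal Python sketch, assuming scikit-learn, an illustrative dataset, and an arbitrary cutoff k, ranks examples by an SVM's decision values, contrasts the top and bottom of the ranking, and fits a symbolic (rule-based) model to extract human-readable rules:

```python
# Minimal sketch of the general approach: explain an SVM's ranking by
# contrasting its extremes with a symbolic learner. Dataset choice,
# k=50, and max_depth=3 are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an SVM and rank all examples by their decision values.
data = load_breast_cancer()
X, y = data.data, data.target
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
scores = svm.decision_function(X)

# Contrast the bottom-k and top-k of the ranking.
k = 50
order = np.argsort(scores)
extremes = np.concatenate([order[:k], order[-k:]])
contrast = np.concatenate([np.zeros(k, dtype=int),   # bottom of ranking
                           np.ones(k, dtype=int)])   # top of ranking

# Fit a symbolic model on the contrasted extremes and print the rules
# that characterize what drives the SVM's ranking.
tree = DecisionTreeClassifier(max_depth=3).fit(X[extremes], contrast)
print(export_text(tree, feature_names=list(data.feature_names)))
```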