
    Interpretable classifier models for decision support using high utility gain patterns

    File
    a2.pdf (1.817 MB)
    Date
    2024-09-06
    Author
    Krishnamoorthy, Srikumar
    Abstract
    Ensemble models such as gradient boosting and random forests have proven to offer the best predictive performance on a wide variety of supervised learning problems. The high performance of these black-box models, however, comes at the cost of model interpretability. They are also inadequate for meeting the regulatory demands and explainability needs of organizations. Interpretability in high-performance black-box models is typically achieved with the help of post-hoc explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). This paper presents an alternative, intrinsically interpretable classifier model that extracts a class of higher-order patterns and embeds them in an interpretable learning model. More specifically, the proposed model extracts novel High Utility Gain (HUG) patterns that capture higher-order interactions, transforms the model input data into a new space, and applies interpretable classifier methods on the transformed space. We conduct rigorous experiments on forty benchmark binary and multi-class classification datasets to evaluate the proposed model against state-of-the-art ensemble and interpretable classifier models. The proposed model was comprehensively assessed on three key dimensions: 1) quality of predictions, using classifier measures such as accuracy, $F_1$, AUC, H-measure, and logistic loss; 2) computational performance on large and high-dimensional data; and 3) interpretability. The HUG-based learning model was found to deliver performance comparable to that of the state-of-the-art ensemble models. Our model was also found to achieve 2-40% improvements in prediction quality and a 45% improvement in interpretability, with significantly lower computational requirements, over other interpretable classifier models. Furthermore, we present case studies in the finance and healthcare domains and generate one- and two-dimensional HUG profiles to illustrate the interpretability of our HUG models. The proposed solution offers an alternative approach to building high-performance, transparent machine learning classifier models. We hope that our ML solution helps organizations meet their growing regulatory and explainability needs.
    URI
    http://hdl.handle.net/11718/27579
    Collections
    • Open Access Journal Articles [352]
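The abstract describes a three-step pipeline: mine HUG patterns that capture higher-order feature interactions, transform the input data into the pattern space, and fit an interpretable classifier on that space. The actual HUG mining algorithm is not given on this page, so the following is only a minimal sketch of the general idea: pairwise patterns scored by information gain stand in for the paper's utility-gain criterion, and the dataset, frequency thresholds, and top-k cutoff are illustrative choices, not the paper's.

```python
# Hypothetical sketch of a HUG-style pipeline. Information gain over
# pairwise feature-bin patterns is a stand-in for the paper's (unspecified
# here) High Utility Gain criterion; all thresholds are illustrative.
from itertools import combinations

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer


def information_gain(mask, y):
    """Entropy reduction in y when splitting on a boolean pattern mask."""
    def entropy(labels):
        if len(labels) == 0:
            return 0.0
        p = np.bincount(labels) / len(labels)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    w = mask.mean()
    return entropy(y) - w * entropy(y[mask]) - (1 - w) * entropy(y[~mask])


# 1) Discretize features so patterns are conjunctions of (feature, bin) items.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
B_train = disc.fit_transform(X_train).astype(int)
B_test = disc.transform(X_test).astype(int)

# 2) Mine second-order patterns (pairs of items) and keep the top-k by gain.
items = [(f, b) for f in range(B_train.shape[1]) for b in range(3)]
scored = []
for (f1, b1), (f2, b2) in combinations(items, 2):
    mask = (B_train[:, f1] == b1) & (B_train[:, f2] == b2)
    if 0.05 < mask.mean() < 0.95:  # skip rare and near-universal patterns
        scored.append((information_gain(mask, y_train), (f1, b1, f2, b2)))
top = [p for _, p in sorted(scored, reverse=True)[:50]]


# 3) Transform data into the pattern space: one binary column per pattern.
def to_pattern_space(B):
    return np.column_stack(
        [(B[:, f1] == b1) & (B[:, f2] == b2) for f1, b1, f2, b2 in top]
    ).astype(float)


# 4) Fit an interpretable classifier on the transformed space.
clf = LogisticRegression(max_iter=1000).fit(to_pattern_space(B_train), y_train)
print("test accuracy:", clf.score(to_pattern_space(B_test), y_test))
```

Because each coefficient of the final model attaches to an explicit pattern (a conjunction of feature-bin conditions), the fitted weights can be read off directly, which is the interpretability property the abstract emphasizes; the paper's actual HUG patterns and profiles would replace the simplified miner above.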
