    The Influence of Status on Evaluations: Evidence from Online Coding Contests

    The_Influence_of_Status_on_Evaluations_Evidence_from_Online_Coding_Contests.pdf (821.2Kb)
    Date
    2022-12
    Author
    Deodhar, Swanand J.
    Babar, Yash
    Burtch, Gordon
    Abstract
In many instances, online contest platforms rely on contestants to ensure submission quality. This scalable evaluation mechanism offers a collective benefit. However, contestants may also leverage it to achieve personal, competitive benefits. Our study examines this tension from a status-theoretic perspective, suggesting that the conflict between competitive and collective benefits, and the net implication for evaluation efficacy, is influenced by contestants’ status. On the one hand, contestants of lower status may be viewed as less skilled and hence more likely to make mistakes. Therefore, low-status contestants may attract more evaluations if said evaluations are driven predominantly by an interest in collective benefits. On the other hand, if evaluations are driven largely by an interest in personal, competitive benefits, a low-status contestant makes for a less attractive target and hence may attract fewer evaluations. We empirically test these competing possibilities using a dataset of coding contests from Codeforces. The platform allows contestants to assess others’ submissions and improve evaluations (a collective benefit) by devising test cases (hacks) in addition to those defined by the contest organizer. If a submission is successfully hacked, the hacker earns additional points, and the target submission is eliminated from the contest (a competitive benefit). We begin by providing qualitative evidence based on semi-structured interviews conducted with contestants spanning the status spectrum at Codeforces. Next, we present quantitative evidence exploiting a structural change at Codeforces wherein many contestants experienced an arbitrary status reduction unrelated to their performance because of sudden changes to the platform’s color-coding system around contestant ratings. We show that status-loser contestants received systematically more evaluations from other contestants, absent changes in their short-run submission quality. Finally, we show that the excess evaluations allocated toward affected contestants were less effective, indicating status-driven evaluations as potentially less efficacious. We discuss the implications of our findings for managing evaluation processes in online contests.
    URI
    http://hdl.handle.net/11718/25913
    Collections
    • Journal Articles [3738]

    DSpace software copyright © 2002-2016  DuraSpace
    Contact Us | Send Feedback
    Theme by Atmire NV
