Designing Fair AI for Managing Employees in Organizations: A Review, Critique, and Design Agenda
dc.contributor.author | Robert, Lionel P., Jr. | |
dc.contributor.author | Pierce, Casey | |
dc.contributor.author | Marquis, Liz | |
dc.contributor.author | Kim, Sangmi | |
dc.contributor.author | Alahmad, Rasha | |
dc.date.accessioned | 2020-02-20T20:47:28Z | |
dc.date.available | 2020-02-20T20:47:28Z | |
dc.date.issued | 2020-02-20 | |
dc.identifier.citation | Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing Fair AI for Managing Employees in Organizations: A Review, Critique, and Design Agenda. Human-Computer Interaction, accepted. | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/153812 | |
dc.identifier.uri | https://doi.org/10.1080/07370024.2020.1735391 | |
dc.description.abstract | Organizations are rapidly deploying artificial intelligence (AI) systems to manage their workers. However, AI has at times been found to be unfair to workers. Unfairness toward workers has been associated with decreased worker effort and increased worker turnover. To avoid such problems, AI systems must be designed to support fairness and to redress instances of unfairness. Despite the attention given to AI unfairness, there has been no theoretical and systematic approach to developing a design agenda. This paper addresses the issue in three ways. First, we introduce organizational justice theory, three fairness types (distributive, procedural, interactional), and frameworks for redressing instances of unfairness (retributive justice, restorative justice). Second, we review the design literature that specifically focuses on issues of AI fairness in organizations. Third, we propose a design agenda for AI fairness in organizations that applies each of the fairness types to organizational scenarios. The paper concludes with implications for future research. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Human-Computer Interaction | en_US |
dc.subject | artificial intelligence | en_US |
dc.subject | artificial intelligence fairness | en_US |
dc.subject | artificial intelligence bias | en_US |
dc.subject | artificial intelligence unfairness | en_US |
dc.subject | AI unfairness | en_US |
dc.subject | organizational justice theory | en_US |
dc.subject | AI fairness in organizations | en_US |
dc.subject | artificial intelligence in organizations | en_US |
dc.subject | organizational artificial intelligence | en_US |
dc.subject | distributive justice | en_US |
dc.subject | procedural justice | en_US |
dc.subject | interactional justice | en_US |
dc.subject | artificial intelligence design agenda | en_US |
dc.subject | artificial intelligence management | en_US |
dc.subject | Artificial Intelligence Autonomy | en_US |
dc.subject | Protecting Worker Privacy | en_US |
dc.subject | AI Accountability | en_US |
dc.subject | AI Audits and Auditability | en_US |
dc.subject | Artificial Intelligence Accountability | en_US |
dc.subject | Artificial Intelligence Audits and Auditability | en_US |
dc.subject | Equity vs. Equality | en_US |
dc.subject | algorithmic fairness | en_US |
dc.subject | algorithmic management | en_US |
dc.subject | fair algorithms | en_US |
dc.subject | algorithmic bias | en_US |
dc.subject | Transparent artificial intelligence | en_US |
dc.subject | Artificial Intelligence Explainability | en_US |
dc.subject | Artificial Intelligence Interpretability | en_US |
dc.subject | Artificial Intelligence literature review | en_US |
dc.title | Designing Fair AI for Managing Employees in Organizations: A Review, Critique, and Design Agenda | en_US |
dc.type | Article | en_US |
dc.subject.hlbsecondlevel | Information and Library Science | |
dc.subject.hlbtoplevel | Social Sciences | |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.contributor.affiliationum | Information, School of | en_US |
dc.contributor.affiliationum | Robotics Institute | en_US |
dc.contributor.affiliationumcampus | Ann Arbor | en_US |
dc.description.bitstreamurl | https://deepblue.lib.umich.edu/bitstream/2027.42/153812/4/AI Fairness Final to Online Feb 24 2020.pdf | |
dc.description.bitstreamurl | https://deepblue.lib.umich.edu/bitstream/2027.42/153812/1/AI Fairness Final to Online Feb 21 2020.pdf | |
dc.description.bitstreamurl | https://deepblue.lib.umich.edu/bitstream/2027.42/153812/6/Robert et al. 2020 AI Fairness New Proof.pdf | |
dc.identifier.doi | 10.1080/07370024.2020.1735391 | |
dc.identifier.source | Human-Computer Interaction | en_US |
dc.identifier.orcid | 0000-0002-1410-2601 | en_US |
dc.description.filedescription | Description of AI Fairness Final to Online Feb 24 2020.pdf : Update Preprint Feb 24 2020 | |
dc.description.filedescription | Description of AI Fairness Final to Online Feb 21 2020.pdf : Preprint | |
dc.description.filedescription | Description of Robert et al. 2020 AI Fairness New Proof.pdf : Corrected Proof Mar 1 2020 | |
dc.identifier.name-orcid | Robert, Lionel P.; 0000-0002-1410-2601 | en_US |
dc.owningcollname | Information, School of (SI) |