Utilizing a designed framework to analyze the ethics and bias of the outward impact of in-use AI systems on minority populations and recommend regulatory corrections
dc.contributor.author | Simon, Elisa | |
dc.contributor.advisor | Rider, Christopher | |
dc.date.accessioned | 2024-10-24T14:44:41Z | |
dc.date.available | 2024-10-24T14:44:41Z | |
dc.date.issued | 2024-08-16 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/195351 | |
dc.description.abstract | As the number of deployed AI inventions continues to increase rapidly, regulators and the public have begun to question how these new technologies can be kept in check. The rate at which these technologies are deployed and improved far outpaces current regulatory systems, creating a gap in oversight, accountability, and trust. This gap in regulation poses significant risks, including the potential for biased or flawed designs, unethical usage, and societal harm. This paper focuses on how bias can enter the design of an AI system. Bias is defined here as output that deviates from an ethically accepted baseline. Bias can enter a system in four main areas: 1) bias in the data, where parties are misrepresented or not represented at all; 2) bias in the algorithm itself that leads to inequitable decisions; 3) bias in the designers of the AI technology; and 4) bias in the way the AI is implemented or deployed into society. Through a simplified two-by-two framework, AI technologies can be separated by the root causes of their biases. This paper analyzes four published cases in which implemented and well-researched AI tools exhibit biases in their data (whether explicit, skewed, or proxy discrimination) or in their algorithmic design. From these cases, common patterns are summarized into general regulatory actions at the governmental, third-party, and business levels. This paper finds that the AI industry requires stricter, more clearly defined standards at the governmental level, enforcement of those standards at the third-party accreditation level, and voluntary action at the soft-law business level, so that AI is used ethically within our society. | |
dc.subject | data bias | |
dc.subject | artificial intelligence | |
dc.subject | case study analysis | |
dc.title | Utilizing a designed framework to analyze the ethics and bias of the outward impact of in-use AI systems on minority populations and recommend regulatory corrections | |
dc.type | Project | |
dc.subject.hlbtoplevel | Engineering | |
dc.contributor.affiliationum | College of Engineering Honors Program | |
dc.contributor.affiliationum | Ross School of Business | |
dc.contributor.affiliationum | Thomas C. Kinnear Professor and Associate Professor of Entrepreneurial Studies | |
dc.contributor.affiliationumcampus | Ann Arbor | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/195351/1/elisasmn_finalreport_SU24.pdf | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/195351/2/elisasmn_poster_SU24.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/24547 | |
dc.working.doi | 10.7302/24547 | en |
dc.owningcollname | Honors Program, The College of Engineering |