
Utilizing a designed framework to analyze the ethics and bias of the outward impact of in use AI systems on minority populations and recommend regulatory corrections

dc.contributor.author: Simon, Elisa
dc.contributor.advisor: Rider, Christopher
dc.date.accessioned: 2024-10-24T14:44:41Z
dc.date.available: 2024-10-24T14:44:41Z
dc.date.issued: 2024-08-16
dc.identifier.uri: https://hdl.handle.net/2027.42/195351
dc.description.abstract: As the number of deployed AI inventions continues to increase rapidly, regulators and the public have begun to worry about how to ensure that these new technologies are kept in check. These technologies are being deployed and improved much faster than current regulatory systems can adapt, creating a gap in oversight, accountability, and trust. This gap in regulation poses significant risks, including the potential for biased or flawed designs, unethical usage, and societal harm. This paper focuses on how bias can enter the design of an AI system. Bias is defined here as occurring when the output of the AI veers away from the ethically perceived baseline. Bias can enter a system in four main areas: 1) bias in the data, whether parties are misrepresented or not represented at all; 2) bias in the algorithm itself that leads to inequitable decisions; 3) bias in the designers of the AI technology; and 4) bias in the way the AI is implemented or deployed into society. Through a simplified two-by-two framework, AI technologies can be separated by the root causes of their biases. This paper analyzes four published cases in which implemented and well-researched AI tools have biases within their data (whether explicit, skewed, or proxy discrimination) or their algorithmic design. From these, common patterns are summarized into general regulatory actions at the governmental, third-party, and business levels. This paper finds that the AI industry requires stricter, more clearly defined standards at the governmental level, enforcement of these expectations at the third-party accreditation level, and ensured action at the soft-law business level to ensure that AI is used ethically within our society.
dc.subject: data bias
dc.subject: artificial intelligence
dc.subject: case study analysis
dc.title: Utilizing a designed framework to analyze the ethics and bias of the outward impact of in use AI systems on minority populations and recommend regulatory corrections
dc.type: Project
dc.subject.hlbtoplevel: Engineering
dc.contributor.affiliationum: College of Engineering Honors Program
dc.contributor.affiliationum: Ross School of Business
dc.contributor.affiliationum: Thomas C. Kinnear Professor and Associate Professor of Entrepreneurial Studies
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/195351/1/elisasmn_finalreport_SU24.pdf
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/195351/2/elisasmn_poster_SU24.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/24547
dc.working.doi: 10.7302/24547
dc.owningcollname: Honors Program, The College of Engineering


