
EbookBell.com

Most ebook files are in PDF format, so you can easily read them using software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, .fb2, etc. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link: https://ebookbell.com/faq


We offer FREE conversion to popular formats on request; however, this may take some time. Therefore, please email us right after payment, and we will provide the converted file as quickly as possible.


For exceptional file formats or broken links (if any), please do not open a dispute. Instead, email us first, and we will assist within a maximum of 6 hours.

EbookBell Team

The Impact Of Human Oversight On Discrimination In AI-Supported Decision-Making, Alexia Gaudeul

  • SKU: BELL-232073014
$31.00 (list price $45.00, -31%)

4.7 (16 reviews)

The Impact Of Human Oversight On Discrimination In AI-Supported Decision-Making by Alexia Gaudeul is available for instant download after payment.

Publisher: European Union
File Extension: PDF
File size: 4.97 MB
Pages: 110
Author: Alexia Gaudeul, Ottla Arrigoni, Vicky Charisi, Marina Escobar-Planas, Isabelle Hupont
DOI: 10.2760/0189570
Language: English
Year: 2025

Product description

The Impact Of Human Oversight On Discrimination In AI-Supported Decision-Making by Alexia Gaudeul, Ottla Arrigoni, Vicky Charisi, Marina Escobar-Planas, and Isabelle Hupont (DOI 10.2760/0189570) is available for instant download after payment.

This large-scale study assesses the impact of human oversight on countering discrimination in AI-aided decision-making for sensitive tasks. We use a mixed-methods approach in a sequential explanatory design: a quantitative experiment with HR and banking professionals in Italy and Germany (N = 1,411) is followed by qualitative analyses through interviews and workshops with volunteer participants from the experiment, fair-AI experts, and policymakers. We find that human overseers are equally likely to follow advice from a generic AI that is discriminatory as from an AI that is programmed to be fair. Human oversight does not prevent discrimination when the generic AI is used. Choices made when the fair AI is used are less gender-biased but are still affected by participants' own biases. Interviews with participants show that they prioritize their company's interests over fairness and highlight the need for guidance on overriding AI recommendations. Fair-AI experts emphasize the need for a comprehensive, systemic approach when designing oversight systems.
