
 

Ethical AI - Fair & Explainable Machine Learning
DeepFin Investor Workshop Summary with IBM

Global Quantitative & Derivatives Strategy
18 September 2020

In this research note we summarise our September 2020 DeepFin Investor Tutorial on Fair and Explainable AI with IBM, held over video conference from London. As Machine Learning and AI continue to proliferate, we explore how to remove unfair bias in AI and how to engender trust through explainable AI models.

What is Machine Learning and AI? Machine Learning by its nature is a way of statistical discrimination. The discrimination becomes objectionable when it places privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Extensive evidence has shown that AI can embed human and societal bias (e.g. race, gender, caste, and religion) and deploy it at scale; consequently, many algorithms are now being re-examined due to illegal bias.

Trusted AI – how can we trust AI? Fundamental questions arise: how can we trust AI, and what level of trust can, and should, we place in it? Forming the basis of trusted AI systems, IBM introduces four pillars: 1) Explainability: knowing and understanding how AI models arrive at specific decisions; 2) Fairness: removing / minimising bias in the model or data; 3) Robustness: ensuring the model is safe and secure; and 4) Lineage: as models are continually evolving, we should track and maintain the provenance of data, metadata, models and test results. See cover chart. To remove unfair bias in Machine Learning, we can intervene before the models are built (pre-processing algorithms), on a model during training (in-processing algorithms), or on the predicted labels (post-processing algorithms).

Removing unfair bias in Machine Learning and explaining AI models: during the practical sections of the workshop, we used Python packages from IBM in the IBM Cloud to remove unfair bias from an AI pipeline and to help explain machine learning model predictions and data.

Removing Unfair Bias in Machine Learning: using German Credit Risk data, we measured bias in the data and models, and applied a fairness algorithm to mitigate it. Having access to the training data, we used a pre-processing algorithm to remove bias (privileged group: age > 25 years): there was subsequently no difference in the rate of favourable outcomes received by the unprivileged group relative to the privileged group.

Explain Machine Learning Models: how can we explain model predictions? We worked through explaining Iris dataset predictions using SHAP, explaining German Credit Risk predictions using LIME, explaining Proactive Retention decisions using TED, and analysing and explaining CDC Income Data using ProtoDash – we explain each of these practical approaches in more detail within the research note.

Figure 1: Four Pillars of Trusted AI
Pillars of trust, woven into the lifecycle of an AI application
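As a flavour of the local-explanation workflow summarised above (explaining German Credit Risk predictions with LIME), the sketch below assumes the open-source lime and scikit-learn packages; a synthetic dataset and hypothetical label names stand in for the actual German Credit Risk data, so it illustrates the technique rather than the workshop's exact notebook.

    # Minimal LIME sketch: a synthetic stand-in for the German Credit data
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from lime.lime_tabular import LimeTabularExplainer

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=[f"f{i}" for i in range(8)],
        class_names=["bad_credit", "good_credit"],  # hypothetical names
        discretize_continuous=True,
    )

    # LIME fits a sparse linear surrogate on perturbations around one row,
    # yielding local feature weights for that single prediction.
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(exp.as_list())  # [(feature condition, weight), ...]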

Big Data & AI Strategies

Ayub Hanif, PhD AC
(44-20) 7742-5620
ayub.hanif@jpmorgan.com
Bloomberg JPMA HANIF <GO>
J.P. Morgan Securities plc

Khuram Chaudhry AC
(44-20) 7134-6297
khuram.chaudhry@jpmorgan.com
Bloomberg JPMA CHAUDHRY <GO>
J.P. Morgan Securities plc

Global Head of Quantitative and Derivatives Strategy
Marko Kolanovic, PhD AC
(1-212) 622-3677
marko.kolanovic@jpmorgan.com
J.P. Morgan Securities LLC

Source: J.P. Morgan Quantitative and Derivatives Strategy, IBM

 See page 20 for analyst certification and important disclosures, including non-US analyst disclosures. J.P. Morgan does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that the firm may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only a single factor in making their investment decision. www.jpmorganmarkets.com

Table of Contents

Ethical AI – Fair & Explainable Machine Learning .......... 3
Removing Unfair Bias in Machine Learning .......... 6
    Explore bias in the data .......... 7
    Exploring bias metrics .......... 7
    Select and transform features to build a model .......... 7
    Build models .......... 8
    Remove bias by reweighing data .......... 8
Explain Machine Learning Models .......... 10
    Understanding model predictions with SHAP .......... 11
    Understanding model predictions with LIME .......... 13
    Understanding model predictions with TED .......... 14
    Understanding data with ProtoDash .......... 16
Takeaways .......... 19

 Ethical AI – Fair & Explainable Machine Learning

With increased proliferation / ubiquity of AI comes increased scrutiny.

How can we trust AI? How can we build trust in AI?

As the hype surrounding advancements in Machine Learning and Artificial Intelligence (AI) starts to deliver, we have seen AI increasingly used in many decision-making applications, e.g. credit, employment, admissions, sentencing and healthcare. Although Machine Learning, by its very nature, is a way of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Extensive evidence has shown that AI can embed human and societal bias and deploy it at scale; consequently, many algorithms are now being re-examined due to illegal bias: e.g. biases in training data, due to either prejudice in labels or under-/over-sampling, yield models with unwanted bias [1]. The fundamental questions thus are: how can we trust an AI system? How can an AI system explain itself? How does unfair human and societal bias leak into an AI machine?

IBM describes four pillars of trust (see Figure 2), forming the basis for trusted AI systems [2]:
1. Explainability: knowing and understanding how AI models arrive at specific decisions.
2. Fairness: removing / minimising bias in the model or data.
3. Robustness: ensuring the model is safe and secure.
4. Lineage: as models are continually evolving, we should track and maintain the provenance of data, metadata, models (with hyperparameters) and test results.

Figure 2: Four Pillars of Trusted AI
Pillars of trust, woven into the lifecycle of an AI application

Source: J.P. Morgan Quantitative and Derivatives Strategy, IBM
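To make the Explainability pillar concrete, the cover summary mentions explaining Iris predictions with SHAP; below is a minimal sketch assuming the open-source shap and scikit-learn packages (an illustration of the technique, not the workshop's exact notebook).

    # Minimal SHAP sketch: attributions for Iris predictions
    import shap
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values for tree ensembles: each value is a
    # feature's additive contribution to one prediction versus the base rate.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one attribution array per class
    shap.summary_plot(shap_values, X)       # global feature-importance view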

So, what is fairness? There are numerous definitions of fairness, many of which conflict. Consider that affording positive weight to any notion of fairness in social policies may sometimes reduce the well-being of every person in society [3]. Simply put, the way you define fairness impacts bias.

[1] Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." California Law Review 104 (2016): 671.

[2] Arnold, Matthew, Rachel K.E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, A. Mojsilović, Ravi Nair, et al. "FactSheets: Increasing Trust in AI Services through Supplier's Declarations of Conformity." IBM Journal of Research and Development 63, no. 4/5 (2019): 6-1.

[3] Kaplow, Louis, and Steven Shavell. "The Conflict between Notions of Fairness and the Pareto Principle." American Law and Economics Review 1, no. 1 (1999): 63-77.

The following are some fairness terms used in ethical AI:

- Protected Attribute – an attribute that partitions a population into groups whose outcomes should have parity (e.g. race, gender, caste, and religion)
- Privileged Protected Attribute – a protected attribute value indicating a group that has historically been at systemic advantage
- Group Fairness – groups defined by protected attributes receiving similar treatments or outcomes
- Individual Fairness – similar individuals receiving similar treatments or outcomes
- Fairness Metric – a measure of unwanted bias in training data or models
- Favourable Label – a label whose value corresponds to an outcome that provides an advantage to the recipient

In Figure 3 we show example group fairness metrics. We have two groups, unprivileged and privileged, and are measuring their favourable outcome rates. In the sample scenario, 6 of the unprivileged individuals have a favourable outcome, whilst 7 privileged individuals have a favourable outcome. We can measure fairness from a number of different perspectives.

Statistical Parity Difference measures the difference in positive outcome rates between the unprivileged and privileged groups.

Disparate Impact expresses the unprivileged group's positive outcome rate in relation to the privileged group's.

Equal Opportunity Difference measures the difference in true positive rates between the unprivileged and privileged groups.

Mitigate often, mitigate early.

Figure 3: How to measure fairness?
Some group fairness metrics

Source: J.P. Morgan Quantitative and Derivatives Strategy, IBM
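To put numbers on the Figure 3 scenario: the note gives only the favourable counts (6 unprivileged vs 7 privileged), so group sizes of 10 are an assumption here, giving favourable rates of 60% and 70%.

    # Worked example for the Figure 3 scenario (group sizes of 10 assumed)
    unpriv_rate = 6 / 10  # P(favourable | unprivileged)
    priv_rate = 7 / 10    # P(favourable | privileged)

    statistical_parity_difference = unpriv_rate - priv_rate  # -0.10
    disparate_impact = unpriv_rate / priv_rate               # ~0.857

A statistical parity difference of 0 and a disparate impact of 1 indicate parity; under the widely used four-fifths rule of thumb, a disparate impact below 0.8 is typically flagged as adverse impact, so this scenario sits just above that threshold.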

Where can we intervene in the AI pipeline to mitigate bias? If we can modify the training data, then we use a pre-processing algorithm. If we can modify the learning algorithm, then an in-processing algorithm can be used. And if we can only treat the learned model as a black box and cannot modify either the training data or the learning algorithm, then a post-processing algorithm can be used.

 Figure 4: Where can you intervene in the AI pipeline?

Source: J.P. Morgan Quantitative and Derivatives Strategy, IBM
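As a sketch of the pre-processing route in Figure 4, the snippet below uses IBM's open-source AI Fairness 360 (AIF360) toolkit with the Reweighing algorithm; the dataset options follow AIF360's public credit-scoring tutorial (which also requires the raw UCI german.data file in AIF360's data folder) and are an assumption, not the workshop's exact configuration.

    # Pre-processing intervention with AIF360: Reweighing on German Credit
    from aif360.datasets import GermanDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    priv = [{'age': 1}]    # privileged group (older applicants)
    unpriv = [{'age': 0}]  # unprivileged group

    data = GermanDataset(
        protected_attribute_names=['age'],
        privileged_classes=[lambda x: x > 25],  # the note's age > 25 split
        features_to_drop=['personal_status', 'sex'],
    )

    # Bias in the raw data: difference in favourable-outcome rates
    metric = BinaryLabelDatasetMetric(
        data, unprivileged_groups=unpriv, privileged_groups=priv)
    print('before:', metric.mean_difference())

    # Reweighing assigns instance weights so that the favourable outcome is
    # independent of the protected attribute before any model is trained.
    rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
    data_transf = rw.fit_transform(data)
    metric_t = BinaryLabelDatasetMetric(
        data_transf, unprivileged_groups=unpriv, privileged_groups=priv)
    print('after:', metric_t.mean_difference())  # ~0, as the note reports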

Need to trade off between the bias / accuracy of the model with respect to yo...
