  • Start Time:
    10:20AM
  • End Time:
    10:40AM
  • Day:
    Day 2

Talk:

  • The classical fallacy: excluding gender (or any other protected attribute) from a model is not enough to guarantee fairness with respect to those attributes (see the sketch after this list)
  • Ways unfairness can be introduced unintentionally – machines do exactly what you instruct them to do, no more and no less
  • Explainability and fairness – explaining a model does not guarantee fairness (“fairwashing”)
  • Metrics and trade-offs in designing robust, explainable and fair algorithms – Pareto curves and the cost of fairness
  • The future of fairness
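
The following is a minimal, hypothetical sketch (not taken from the talk) illustrating the fallacy in the first bullet: a model trained without the protected attribute can still be unfair when another feature acts as a proxy for it. The synthetic data, variable names and thresholds are all assumptions made up for illustration.

```python
# Hypothetical illustration: dropping a protected attribute does not prevent
# unfairness when a correlated proxy feature remains in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., a binary gender flag) -- never shown to the model.
gender = rng.integers(0, 2, size=n)

# A "neutral" feature that is strongly correlated with gender (a proxy),
# plus a genuinely informative feature.
proxy = gender + rng.normal(0, 0.3, size=n)
skill = rng.normal(0, 1, size=n)

# Historical outcomes are biased: one group was approved less often.
logits = 1.5 * skill - 1.0 * gender
y = (logits + rng.normal(0, 0.5, size=n) > 0).astype(int)

# Train only on the "non-protected" features.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic-parity gap: difference in positive-prediction rates by group.
rate_0 = pred[gender == 0].mean()
rate_1 = pred[gender == 1].mean()
print(f"approval rate (group 0): {rate_0:.2f}")
print(f"approval rate (group 1): {rate_1:.2f}")
print(f"demographic parity gap:  {abs(rate_0 - rate_1):.2f}")
```

Because the proxy carries information about the protected attribute, the model reproduces the historical bias even though gender was never an input, which is why fairness metrics such as the demographic-parity gap above must be measured directly.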

On-Demand Presentation

Associated Speakers:

Javier Campos

Head of DataLabs

Experian

Associated Talks:

10:20AM - Day 2

Fairness & Explainability in AI