Green lights, grim signs and guard rails: what does 'ethical' AI mean in practice?

Over the past few years, designing 'ethical' Artificial Intelligence (AI), and all that it might encompass, has captivated discussion across disciplines. From embedding ethical principles in machines to expectations that humans design machines ethically, a range of routes to more ethical AI have been proposed.

What does this mean in practice? What should look and feel different for humans interacting with autonomous systems in high stakes environments? How will our practices as an industry change? What does it mean to be a 'trusted' or 'ethical' system?

Ellen considers emerging mechanisms of assurance, auditing and transparency in autonomous system design, drawing lessons from other science and engineering disciplines.


Biography

Ellen Broad returned to Australia in late 2016 from the UK, where she was Head of Policy for the Open Data Institute (ODI), an international non-profit founded by Sir Tim Berners-Lee and Sir Nigel Shadbolt. While in the UK, Ellen was also a ministerial adviser on data to senior UK cabinet minister Elizabeth Truss. She has held roles as Manager of Digital Policy and Projects for the International Federation of Library Associations and Institutions (Netherlands) and Executive Officer for the Australian Digital Alliance, and is currently Head of Technical Delivery, Consumer Data Standards, at CSIRO's Data61.

She is a member of the Australian government's Data Advisory Council and the author of Made by Humans: The AI Condition (Melbourne University Publishing, 2018), and has written about data for publications including The Guardian, New Scientist and Griffith Review. A board game about data that she created with Jeni Tennison, CEO of the Open Data Institute, is being played in 19 countries.

Date & time

12–1pm 25 Sep 2019

Location

Room: Seminar Room 1.33

Updated: 1 June 2019 / Responsible Officer: Dean, CECS / Page Contact: CECS Marketing