5.12 Wrapping up Module 5
Congratulations on completing Module 5. ✨
Barely a day goes by without a new article in the popular press and media about ethics in AI. This final module, Module 5, focuses on ethical concerns with Artificial Intelligence. While there have been plenty of articles about ethics and AI in recent years, it is less clear what the implications are for education and training and, more practically, what can be done about them.
In this module, we examine general ethical issues associated with AI. These include bias, which enters particularly through the data used to train AI systems and through the labeling of that data. By now there is a wide range of examples of how algorithms can reflect bias in their training data (which in turn may reflect more widespread biases within society). Large training datasets may not only fail to reflect the diversity of societies but may also lack the agility to capture changing social movements, for instance the impact of the Black Lives Matter campaign.
A further problem is the lack of transparency of many algorithms, for example in FinTech applications in the banking and finance industry.
Timnit Gebru, a leading ethics researcher who works on algorithmic bias and data mining, was controversially sacked by Google after criticising the organisation in a paper. She argues that ethical choices cannot be left to AI: rather, an observer system is needed to guarantee accountability, transparency, fairness and honesty. She is an advocate for diversity in technology and a co-founder of Black in AI, a community of Black researchers working in artificial intelligence.
In Vocational Education and Training, technological innovation does not always equate to social progress, and educators have long raised concerns over equity. There are issues of access, controversy over the increasing use of online proctoring applications, and questions of bias around the use of AI-based programs for teacher recruitment. Widespread concerns have also been expressed over the surveillance of students.
The development of AI is leading to a technological revolution, which raises the question of whether we can socially shape the evolution of technology. This is not a new question: information technology can already be used in different socio-technological ways. AI can lead to the centralisation of power and control, or it can support the development of a democratic society. It can erode or protect privacy. It could be used to provide better education and health systems. But people need to be aware of how the technology operates and must regulate its use.