5.1 Introduction to Module 5
Image: Maxim Hopkin – Unsplash
The final module, Module 5, focuses on ethical concerns with Artificial Intelligence. While there have been plenty of articles in the press and media about ethics and AI in recent years, it is less clear what the implications are for education and training and, more practically, what can be done about them.
The first unit, 5.2, asks why ethical issues are seen as so important in the development of AI. One of the major issues is bias. Algorithms will always have some bias, but possibly more problematic is that training data reflects existing biases, including in the labelling of that data. Examples include photo colouring applications and face recognition software.
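The point about biased labels can be illustrated with a minimal sketch. The data, groups and labels below are entirely invented for illustration: a trivial "model" that simply learns the most common historical label for each group will faithfully reproduce whatever skew was baked into the training labels.

```python
from collections import Counter

# Toy training set of (group, label) pairs. The historical labels are
# skewed: group "A" was usually labelled "approve", group "B" usually
# "reject", even though the groups are otherwise identical here.
training_data = (
    [("A", "approve")] * 9
    + [("A", "reject")] * 1
    + [("B", "approve")] * 2
    + [("B", "reject")] * 8
)

def majority_label(data, group):
    """Predict the most common historical label for a group --
    a stand-in for any model that learns patterns from its data."""
    labels = [label for g, label in data if g == group]
    return Counter(labels).most_common(1)[0][0]

# The "model" reproduces the bias in the labels, not any real merit:
print(majority_label(training_data, "A"))  # approve
print(majority_label(training_data, "B"))  # reject
```

No matter how sophisticated the learning algorithm, if the labels encode past discrimination, a model trained to match them will encode it too.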
Unit 5.3 continues the exploration of ethical concerns. It gives the example of algorithms widely used in the financial technology industry to provide recommendations on loans and financial support for individual customers. Not only may there be bias in these algorithms, but how the systems work is not transparent.
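A hypothetical sketch of why such opacity matters: a scoring rule may never look at a protected attribute directly, yet still disadvantage a group through a correlated proxy feature (here, a postcode). All names, weights and thresholds below are invented for illustration; the applicant only sees the decision, never the rule.

```python
def loan_score(income, postcode):
    """Invented scoring rule: income drives the score, but one
    postcode (a proxy for a historically disadvantaged area)
    carries a hidden penalty."""
    score = income / 1000
    if postcode == "ZONE_9":  # hypothetical redlined area
        score -= 25
    return score

def decision(income, postcode, threshold=30):
    """Return the only thing the customer ever sees."""
    return "approve" if loan_score(income, postcode) >= threshold else "reject"

# Two applicants with identical income get different outcomes:
print(decision(40_000, "ZONE_1"))  # approve
print(decision(40_000, "ZONE_9"))  # reject
```

Because the customer sees only "approve" or "reject", the postcode penalty is invisible from the outside, which is exactly why the unit argues for transparency and external scrutiny of such systems.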
Unit 5.4 provides a case study. Timnit Gebru, a leading ethics researcher who works on algorithmic bias and data mining, was controversially sacked by Google after criticising the organisation in a paper. She says ethical choices cannot be left to AI: rather, an observer system is needed to guarantee accountability, transparency, fairness and honesty. She is an advocate for diversity in technology and co-founder of Black in AI, a community of Black researchers working in artificial intelligence. She had drawn attention to the issue that text data for training AIs is frequently racist, violent or sexist.
Other ethical concerns associated with AI include its failure to take into account social movements such as Black Lives Matter, the well-publicised biases in facial recognition (with more errors for people of colour and women than for white men) and the carbon footprint of developing AI applications.
Unit 5.5 examines how ethical issues around AI affect Vocational Education and Training. Technological innovation does not always equate to social progress, and education has long raised concerns over equity. There are issues of access, controversy over the increasing use of online proctoring applications, and questions of bias raised around the use of AI-based programs for teacher recruitment. Widespread concerns have been expressed over the surveillance of students.
An important issue is equality. According to UNESCO, only 12 per cent of AI developers are women. Less than 2 per cent of employees in technical roles at Facebook and Google are Black. Little wonder, then, that in many applications AI is seen as biased towards white men.
The unit features an interview with Dr.-Ing. Fereshta Yazdani about what AI can and cannot do, and about data bias.
Unit 5.6 explores how we can teach about the issues of AI and bias. It links to Explore AI Ethics, a curated directory of educational resources for teaching and learning about the ethics of artificial intelligence. The directory is organised and searchable by both topic and category; each page contains a short excerpt from the resource, a press release, a video or, where permission has been granted, the full text.
Unit 5.9 asks how we can socially shape technology. The development of AI is leading to a technological revolution, which raises the question of whether we can socially shape the evolution of technology. This is not a new question: information technology can already be used in different socio-technological ways. AI can lead to the centralisation of power and control, or it can be used to develop a more democratic society. It can erode or support privacy. It could be used to provide better education and health systems. But people need to be aware of the way technology operates and to regulate its use.