Educ-AI-tion Week: Generative Artificial Intelligence (AI) Literacy

By Andrew Cox

What are the essential skills we need to use generative AI effectively and ethically – what we might call generative AI literacy?


One commonly mentioned dimension of AI skills is knowing how to use prompts effectively to get the kind of answer one wants. Formulating well-defined prompts can ensure you get a more useful answer, such as a bullet-point summary of an article's findings rather than a 500-word summary of the whole text.
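
To illustrate (these are made-up prompts, not magic formulae), compare:

  Vague: "Summarise this article."
  Specific: "Summarise the key findings of this article as five short bullet points, in plain English, and note any limitations the authors mention."

The second prompt spells out what to extract (the findings), the format (bullet points) and the length, so the answer is far more likely to be the one you actually wanted.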

Good prompting can really matter. For example, generative AI has been trained to be very polite. If you show it a picture of a dog and assert that it is a cat, there is a fair chance that it will confirm that it is a cat. So how we prompt AI to be direct with us is important.
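
One practical response to this politeness is to ask for candour explicitly. For example, an instruction along these lines (again, purely illustrative):

  "Be direct with me: if my assumption is wrong or my reasoning is flawed, say so clearly rather than agreeing with me."

No wording guarantees an honest answer, but making the request explicit reduces the chance of the AI simply telling us what we appear to want to hear.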

So a lot of AI literacy talk is about prompting or "prompt engineering".

Prompting is important, but in this blog I want to focus on three other dimensions that are less discussed.

Being reflective

An important dimension of generative AI skills is the need to be reflective about its impact on ourselves as learners.

A number of studies have investigated the impact of using generative AI on the effectiveness of learning, as well as its indirect effects, such as on our social engagement in learning.

I have heard a lot of people say that AI just makes them more efficient. But there may be some hidden costs to that efficiency.

It may be more "efficient" to get Gemini to summarise a paper or explain a concept for us. But what depth of learning are we losing when we don't struggle through the hard task of reading the paper for ourselves, and through that process learn how to be better at reading?

It's great that AI can redraft our sentences in a more academic style. But are we losing the skill to write better ourselves, and coming to rely too much on the technology?

It may be "efficient" to ask Gemini rather than a teacher or peers to explain a concept that is difficult to understand. But what are we losing in terms of building up the community of learning around us? Yes, Gemini, is always on and answers very rapidly, but discussing ideas with peers is one of the most productive parts of learning at university. 

I think we need to be reflective about our use of new technologies like generative AI and think honestly about how our use impacts our learning.

Generative AI is a great support to learning, and we should be taking time to discover how to use it well. But we need to be alert to some of the less obvious impacts on our learning experience.




Being critical

Another discourse we hear a lot around generative AI is that it's "just a tool", implying that it is in itself neutral and that it's just a case of how you choose to use it. People also often talk about it being a "free tool."

But generative AI services aren't just tools; they are systems built by BigTech companies to make a profit.

We need to ask questions about the nature of these companies and how they operate. In her brilliant book Atlas of AI, Kate Crawford deconstructs the inherently extractive and exploitative nature of the AI industry.

We can point to:
  1. The way our personal data is extracted for profit, as with social media.
  2. The way BigTech has also appropriated content on the internet to train its models.
  3. The lack of diversity in the BigTech workforce, which leads to services riddled with bias.
  4. The exploitative way the companies have used cheap and precarious labour in countries like Kenya to do the unpleasant work of filtering out toxic content when training their large language models.
  5. The way AI has been developed with little regard for the impact on human work, e.g. in the creative industries.
  6. The environmental impacts of AI. Google and Microsoft have recently acknowledged falling short of their carbon emission targets precisely because of AI. Image generation is a particularly resource-intensive use.

There is a pattern here. It is not about the technology of AI per se, but about the business model of the companies offering the technology.

So there are strong ethical question marks over some of these AI services, especially ChatGPT. Our choices about whether to use these "free tools" should be informed by an awareness of their wider social impacts.




Being context-aware

A third thought about essential AI skills relates to context.

There are many great and creative uses of generative AI, especially in learning. When we are exploring ideas at university, drawing on lots of services – search engines, library search tools and generative AI – can feed into the richness of our experience. I think generative AI is at its best when we are using it to bounce our own ideas off. University is the place to explore in this way.

However, this may play out differently in the work context. 

We all know not to rely entirely on answers from generative AI, but in many work contexts it is simply not tolerable to have any level of inaccuracy. For example, if you are working in a legal practice or a healthcare setting, the answers have to be right. So we need to think about how the context shapes whether generative AI should be used.

I think a lot of employers are interested in generative AI but remain sceptical because of issues of accuracy.

We probably all think that AI skills are going to be key to employability going forward, but the definition of appropriate use will be highly contextual.

Conclusion

The use of generative AI in education is highly controversial, for good reason.

We need to develop a reflective, critical and context-aware approach to using it.


References
For more thoughts about AI literacy, you might care to read:

Zhao, X., Cox, A., & Cai, L. (2024). ChatGPT and the digitisation of writing. Humanities and Social Sciences Communications, 11(1), 1-9.


Dr Andrew Cox is a Senior Lecturer at the Information School