In light of recent changes to the AI landscape, it is timely to reflect upon current assessment and pedagogical practices.
It is clear AI is here to stay and is infiltrating software we and our students use daily; the recent addition of AI to Snapchat is evidence of its proliferation. Its use is not inherently problematic; however, its place in learning requires critical conversations and considered planning.
When considering the implications of AI on teaching and learning, Brisbane Grammar School teacher Bridget Pearce suggests revisiting our purpose as educators first. Pearce reflects, “everything I do is to elevate all students to optimise their impact on the world”, and anything that does not align with that purpose is superfluous. The ways in which we engage with AI in the classroom must be strategic and grounded in students’ deep learning.
Academic Integrity and Girls’ Education
While the issue of authenticity in student assessment is not new, AI brings about new challenges in part due to the increased sophistication and availability of AI-generated content. It is therefore appropriate to reflect on the mechanisms that promote academic dishonesty.
Interestingly, low stress resistance, low self-efficacy, and ineffective achievement goals and outcome expectations have been found to contribute more strongly to the likelihood of cheating than academic literacy (Pekovic et al., 2020; Ramberg & Modin, 2019; Sarmiento & Manaloto, 2018). Kaczmarek and Trambacz-Oleszak (2021) found that adolescent girls were over three times more likely than boys to experience high levels of perceived stress associated with school, which has severe implications for both academic and personal wellbeing. Students experiencing these states may be more likely to rely on AI to alleviate stress or to compensate for a lack of confidence in navigating the task. Furthermore, lack of engagement, specifically cognitive and emotional engagement, can lead to lower levels of motivation and academic dishonesty (Putarek & Pavlin-Bernardic, 2019).
The literature suggests girls’ responses to academic dishonesty are more strongly related to moral values (Ramberg & Modin, 2019). Girls generally acknowledge that cheating is morally wrong, yet many still engage in this behaviour. Malik et al. (2023) found 34.5% of girls self-reported cheating on assessment in high school; some studies suggest this figure could be as high as 80% in other contexts. How can girls’ schools leverage moral reasoning to reinforce the principles of academic integrity and what it means to operate with integrity? Roberts (2021) suggests educators might consider ways to help ‘shape students to become ethical individuals who have the capacity to make moral decisions’ (p. 50). Creating authentic assessment, giving students a voice, increasing agency, providing clear policies and holding students accountable are all integral to increasing engagement and building communities of ethical learners.
Assessment is a ‘crucial pedagogical instrument’ that measures and enhances learning (Masuku, 2021, p. 274). The assessment phase of learning should come at a point of knowledge fluency and should be an authentic reflection of the students’ abilities. Assessment instruments can then accurately reveal students’ proficiency at a point in time and can inform teachers’ subsequent planning.
AI can quickly and easily answer many assessment tasks, which heightens the urgency for educators to reflect on current practices to identify any instruments that may be steeped in superficial learning. Assessment that can be answered wholly with AI encourages passive engagement, as students are motivated to simply cope with the task or tick the box [surface learning] rather than being motivated to learn [deep learning]. More than ever, we must continue to reflect on how to move students along the continuum from reproducing information [something at which AI is highly adept] to transforming knowledge (Entwistle, 2000). This transformation of knowledge requires deep learning mindsets in which students engage with higher-order thinking skills such as analysis, evaluation and synthesis to solve complex or layered problems. This represents a paradigm shift in pedagogy and assessment from learning to think to thinking to learn, as posited by Ritchhart et al. (2011) and Fullan et al. (2018).
Characteristics of Assessment
Beyond QCAA’s (2023) six principles and three characteristics of high-quality assessment, to adapt to the AI landscape, teachers might also consider three Cs – challenge, currency, and context.
Is the assessment: Challenging? Current? Contextual?
- Challenging: Assessment that requires students to engage with multiple higher-order thinking skills provides opportunities for deep learning to be demonstrated and makes an AI-generated response difficult.
- Current: Assessment that utilises recent stimulus, current events or hot issues can enhance student connection with the topic and build their cultural capital. It is also difficult (although not impossible) for ChatGPT to generate responses based on information published in the last two years.
- Contextual: Assessment that is authentic and relevant to the students’ context connects students to the third space. This is the connected learning space between a student’s personal life and school life, where engagement is enlivened and relevance is heightened (Maniotes, 2005). This form of assessment also moves beyond factual recall into the inquiry realm, which is more difficult for AI to respond to.
By creating a participatory learning environment and connecting the learning material to students’ own experiences and their world, students are more likely to be engaged and invested in the learning process. If student engagement is high, reliance on AI may be reduced.
Schools must determine a definition of acceptable use of AI. This should be an agreement between teachers and students, and between faculties. Outlining when it is and is not appropriate to use AI is vital for transparency and for teaching students how to ethically engage with a technology that will continue to shape their worlds.
To gauge teachers’ perceptions of acceptable use and to prompt the dialogue, Pearce suggests we reflect on the following scenarios:
- Is it acceptable use if a student has completed the thinking and writing for their task, but asks ChatGPT to correct their grammar?
- Is it acceptable use if students ask ChatGPT to make their writing more concise to reduce their word count from 1500 to 1000 words?
- Is it acceptable use if a student asks ChatGPT to give five possible structures for their assessment before they start writing?
- Is it acceptable use if a student asks ChatGPT for 20 possible article titles on their research topic and picks one to use in their final assessment?
It is important that teacher librarians are also part of these conversations. When determining acceptable use, it may be worth considering the phases of inquiry or information search behaviour. It is often arduous for students to identify who the seminal academics are for a given topic or which is the key primary source for an ancient civilisation. Is it acceptable for AI to assist students with locating and selecting these types of resources so that more time is spent on the skills that lead to knowledge creation? As inquiry and information experts, teacher librarians are positioned to provide guidance on the behaviours and processes that occur during an inquiry and searching task. This insight can lead to robust professional conversations about the place of AI in teaching and learning.
Teacher librarians play a pivotal role in supporting teachers and students to evaluate learning tools and sources of information. During the recent ALIA (2023) webinar AI, libraries, and the changing face of information literacy, Fiona Bradley stressed that information literacy is one of the most critical skills of our time. While AI-generated content can provide a range of benefits, from lesson and unit planning to student ideation, it also presents ethical and accuracy challenges. As proposed by Dr Kay Oddone during the webinar (ALIA, 2023), under the umbrella of information literacy, we might now see algorithmic literacy sit alongside media literacy, digital literacy and critical literacy. However, the true implications of AI for these literacies are yet to surface. ChatGPT is not infallible; therefore, fact checking is integral. While its information database is vast, it is limited because it is not yet fully web integrated, so it can and does generate factually inaccurate, biased or misleading information. However, full web integration is not far away: Microsoft’s Bing AI chatbot is already demonstrating the capacity for AI to draw upon information from the web to build responses.
Caulfield’s (2017) SIFT method is a highly effective technique for evaluating information through a metacognitive lens. While it is difficult to evaluate the original source of ChatGPT’s information, steps 1 and 3 of SIFT – Stop, and Find trusted coverage – are crucial. These mindsets encourage students to think critically about the content they are exposed to by promoting information fluency behaviours such as fact checking and lateral reading.
- Stop: Before you engage with any content, take a moment to stop and evaluate what you are seeing.
- Investigate the source: Look for information about the source of the content, such as the author or publisher, to assess their credibility and potential biases.
- Find trusted coverage: Try to find multiple sources that cover the same topic to get a broader perspective and cross-reference the information.
- Trace claims, quotes, and media back to the original context: Check if the claims, quotes, or media have been taken out of context or manipulated in any way.
While there are uncharted waters ahead for educators, our core principles hold true. When navigating the emerging and quickly evolving AI landscape, we must keep thinking at the centre of what we do and what we teach.