by Kip Glazer
As a school leader and educator, I am deeply committed to promoting the ethical use of artificial intelligence (AI), a responsibility we all share as AI-embedded tools become increasingly prevalent in our school systems. I strongly advocate using interactive methods, such as games, to engage learners and educators in these crucial discussions (an approach featured in this blog post by Marlon Matilla). I believe the active participation of both researchers and practitioners is essential, and I am proud to have contributed to this vital discussion as part of the Educator CIRCLS community over the years.
As I think about the ethical use of AI, I can’t help but think about how the design of an AI system nudges users to behave. In his book Thinking, Fast and Slow, Daniel Kahneman argues that humans have two modes of thought: System 1, which reacts quickly and instinctively, and System 2, which works more slowly and deliberately. Unfortunately, our System 1 thinking (aka our impulses) often overrides our System 2 thinking when we make decisions, and many companies have exploited this to maximize profit at the expense of consumers. As much as technology companies tout the usefulness of AI systems, I am concerned that the rapid spread of AI is amplifying System 1 thinking at the expense of System 2 thinking. Because AI prioritizes speed and volume (the hallmarks of System 1 thinking) over quality based on careful deliberation (the hallmark of System 2 thinking), I am concerned that we humans will not be able to resist the temptation of choosing the quickest answers. If you believe that is unlikely, I encourage you to read Nudge: The Final Edition by Thaler and Sunstein, who show how choice architecture influences human behavior. They cite Kahneman’s work extensively to demonstrate that even the slightest nudge easily influences all of us and can have a significant impact.
Undoubtedly, we have made significant strides in acknowledging and discussing the potential harm of AI. Our Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar is a testament to how the learning sciences field is actively responding to growing concerns about the ethical use of AI. It’s important to note that the dangers and harms of AI often do not stem from any malicious intent by its creators. Instead, they result from our automatic thinking, reinforced by Generative AI’s (GenAI) speed. However, increased understanding and awareness can help us navigate these challenges and mitigate at least some of the potential harm.
Still, I can’t help but be concerned about the proliferation of GenAI, as it seems to automate and propagate products that replicate our basest human instincts. Think about deepfakes and their ability to go viral! Think about students attempting to shortcut their learning. Think about educators using GenAI tools to grade all their students’ work to save time. Such routine behaviors are not what we typically consider unethical AI use; rather, it is our automatic thinking accepting the results generated by a bias-ridden system. Even when we are aware of the potential harm of blindly trusting GenAI’s output, it is difficult to resist the simple nudges of our environment. Most importantly, I am concerned that classroom teachers without basic AI literacy won’t be able to guide their students to recognize the potential harm done by GenAI because they can’t compete against System 1 thinking.
To activate System 2 thinking safely and productively, teachers need support from knowledgeable leaders. Because we recognized the importance of school leaders in implementing System 2 thinking, Sofía De Jesús and I wrote Framework for AI Implementation for Administrators and Leaders in K-12 Schools while participating in the Computer Science Teachers Association’s Equity Fellowship Program. We wanted to make things easier and more manageable for the many school leaders who feel overwhelmed by the number of AI-enabled tools currently available and by the feverish sales pitches urging them not to leave their teachers and students behind! With the framework, we hope they can access their System 2 thinking as they consider purchasing and implementing AI-enabled tools.
As more AI-embedded tools are introduced into our school ecosystems, educators need to stay vigilant against the urge to choose automation and efficiency over ethical and deliberate practice. Now more than ever, we must activate our System 2 thinking to ensure we are modeling and instilling equitable values with AI in education.
References:
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Thaler, R. H., & Sunstein, C. R. (2021). Nudge: The final edition. Yale University Press.
About the Author
Kip Glazer, Ed.D., is the proud principal of Mountain View High School. A long-time practitioner participant in CIRCLS and Educator CIRCLS, she has been sharing her school leader’s perspective on various learning technology topics, including artificial intelligence implementation in schools. She has been an Equity Fellow for the Computer Science Teachers Association and a member of the EngageAI Practitioner Board.
How to cite this work
CIRCL Educator posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: "Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)."
Suggested citation format: [Authors] ([Year]). [Title]. CIRCLEducators Blog. Retrieved from [URL]