Category Archives: Ethical AI

The Convenience of Inconvenience: Exploring Connections with Researchers and Practitioners

AI letters surrounded by electronic cords
Image by Steve Johnson on Unsplash
by Nneka McGee and Krystal Chatman

The goals of change are aspirational. In the artificial intelligence (AI) era, we see change as a catalyst that will bring about the true transformation in education many of us have sought for decades. We all agree that change is necessary, right? Therefore, it’s only a matter of time before aspirations become concrete. Though we should continue to strive for the best ideas, there is one reality we must face: The difficulty of change is an inconvenient truth.

While change can be a challenge, particularly in education, hope remains constant. Artificial intelligence and other emerging technologies have potential, but what are the optimal ways to incorporate them into learning environments? Researchers have long explored questions about how to improve educational outcomes. Practitioners bring real-world experience that significantly impacts learners. Bringing these disparate groups together offers a promising path to understanding change.

In the summer and fall of 2024, the Center for Integrative Research in Computing and Learning Sciences (CIRCLS) convened researchers and practitioners to explore AI problems of practice in PreK-12 education (for further details, see Educator CIRCLS ’24 Summer Series). The convening was an innovative act in itself: those who explore educational theory (researchers) are rarely given the opportunity to collaborate with those who facilitate the suggested strategies (teachers). It offered both sides a chance to engage at a deeper level, gaining insight into how the “other side” operates in order to improve their own practice. Through discussions, researchers conveyed how valuable the experiences of practitioners are to the work of advancing innovation in education. Practitioners shared their interest in research areas such as generative AI and expressed their concerns about helping students navigate the future of learning.

Problems of Practice

Dialogue was a primary focus of the convening. To bridge the distance between practice and research, CIRCLS facilitators divided attendees into groups with the goal of centering conversations on what practitioners experience within their educational ecosystems. Through in-depth discussions, researchers and practitioners identified areas of focus by posing questions such as:

  • Do we need multiple literacies?
  • How do we address accessibility issues?
  • How do we use AI tools and what are the benefits?
  • How do we reach across the spectrum to include more voices in developing AI tools?
  • How can we use AI to support educator coaching?

AI literacy was a common thread among the groups. There was a collective acknowledgement of the limitations surrounding this emerging technology given the current state of training at educational institutions. Without the requisite foundations in how to use AI effectively, its potential to transform education remains unclear. Additional conversations explored the limitations practitioners face in utilizing or implementing AI tools because of policy constraints, and how AI use in learning spaces cannot progress swiftly until practitioner decision-makers gain more knowledge of and confidence in AI. Although researchers may be working in other areas, the conversations provided essential insights into the needs and wonderings of practitioners.

Research to Action

During a second round of grouping, researchers also sought input on their research topics and potential impact in today’s classrooms. Recruiting research subjects has always been a challenge; therefore, informing practitioners of research topics opened avenues for feedback and questions related to applicability and feasibility. Several topics emerged from the group, including:

  • Collaborative learning
  • Student voice
  • Systemic challenges
  • AI scoring
  • Policy development
  • Navigating tensions
  • Centering pedagogy

Just as in the prior group discussions, AI literacy was a popular talking point, often interwoven within the topics listed above.

When groups reconvened, all agreed that more conversation was necessary. More action was also on the minds of researchers and practitioners, but the realities of change were ever present. Several participants expressed concern about the trajectory of AI in education, particularly when considering access, bias, safety, security, and environmental impact. Despite misgivings, there was an overall sense of optimism about the future of research and the contributions of practitioners in propelling use of emerging technologies forward.


About the Authors

Nneka J. McGee, J.D., Ed.D. is an educational consultant and former Chief Academic Officer dedicated to ensuring students have access and opportunities to explore AI and emerging technologies. She completed her doctoral studies on AI and its implementation in K-12 classrooms. She is a frequent speaker on AI topics and serves as a member of several AI-focused advisory boards and committees.

Krystal Chatman is a seasoned educator and advocate with over 15 years of experience in integrating technology into teaching and promoting equitable computer science education. Currently serving as the Professional Learning & Community Manager for the Computer Science Teachers Association, she is also a co-founder of the Mississippi AI Collaborative, focused on AI upskilling across the Southern U.S. A dedicated leader in broadening participation in computer science and STEM, Krystal has facilitated panels on AI at the White House, spoken at the Brookings Institution, and developed Mississippi’s first AI conference.

Ethical Use of AI – Privileging measured and deliberate thinking

Stack of books with Thinking fast and slow book by Daniel Kahneman in front
Photo by Ahmed Almakhzanji on Unsplash
by Kip Glazer

As a school leader and educator, I am deeply committed to promoting the ethical use of artificial intelligence (AI), a responsibility that we all share as AI-embedded tools become increasingly prevalent in our school systems. I strongly advocate using interactive methods, such as leveraging games to engage learners and educators in these crucial discussions (featured in this blog post by Marlon Matilla). I believe that the active participation of both researchers and practitioners is essential, and I am proud to have contributed to this vital discussion by being a part of the Educator CIRCLS community over the years.

As I think about the ethical use of AI, I can’t help but think about how the design of an AI system nudges users to behave. In his book Thinking, Fast and Slow, Daniel Kahneman argues that humans have System 1 thinking, which reacts quickly and instinctively, and System 2 thinking, which reacts more slowly and deliberately. Unfortunately, our System 1 thinking (our impulses) often overrides our System 2 thinking when we make decisions, and many companies have exploited this to maximize profit at the expense of consumers. As much as technology companies tout the usefulness of AI systems, I am concerned that the rapid spread of AI is amplifying System 1 thinking at the expense of System 2 thinking. Because AI prioritizes speed and volume (hallmarks of System 1 thinking) over quality based on careful deliberation (the hallmark of System 2 thinking), I worry that we humans will not be able to resist the temptation of choosing the quickest answers. If you believe that is unlikely, I encourage you to read Nudge: The Final Edition, in which Thaler and Sunstein argue that choice architecture shapes human behavior. They cite Kahneman’s work extensively to show that even the slightest nudge can influence all of us and have a significant impact.

Undoubtedly, we have made significant strides in acknowledging and discussing the potential harm of AI. Our Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar is a testament to how the learning sciences field is actively responding to growing concerns about the ethical use of AI. It’s important to note that the dangers and harms of AI often do not stem from any malicious intent on the part of its creators. Instead, they result from our automatic thinking, reinforced by generative AI’s (GenAI) speed. However, increased understanding and awareness can help us navigate these challenges and mitigate some of the potential harm.

Still, I can’t help but be concerned about the proliferation of GenAI, as it seems to automate and propagate products that replicate our basest instincts. Think about deepfakes and their ability to go viral! Think about students attempting to shortcut their learning. Think about educators using GenAI tools to grade all of their students’ work to save time. Such routine behaviors are not what we typically consider unethical AI use; rather, they reflect our automatic thinking accepting the results generated by a bias-ridden system. Even when we are aware of the potential harm of blindly trusting GenAI’s output, it is difficult to resist the simple nudges of our environment. Most importantly, I am concerned that classroom teachers without basic AI literacy will not be able to guide their students to recognize the potential harm done by GenAI, because they cannot compete against System 1 thinking.

To activate System 2 thinking safely and productively, teachers need support from knowledgeable leaders. Because we recognized the importance of school leaders in implementing System 2 thinking, Sofía De Jesús and I wrote Framework for AI Implementation for Administrators and Leaders in K-12 Schools while participating in the Computer Science Teachers Association’s Equity Fellowship Program. We wanted to make things easier and more manageable for the many school leaders who feel overwhelmed by the number of available AI-enabled tools and by the feverish sales pitches urging them not to leave their teachers and students behind! With the framework, we hope they can access their System 2 thinking as they consider purchasing and implementing AI-enabled tools.

Educators need to stay vigilant against the urge to choose automation and efficiency over ethical, deliberate practice as more AI-embedded tools are introduced to our school ecosystems. Now more than ever, we must activate our System 2 thinking to ensure we are modeling and instilling equitable values with AI in education.

References:

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Thaler, R. H., & Sunstein, C. R. (2021). Nudge: The final edition. Yale University Press.


About the Author

Kip Glazer, Ed.D. is the proud principal of Mountain View High School. A long-time practitioner participant for CIRCLS and Educator CIRCLS, she has been sharing her school leader’s perspective on various learning technologies topics, including Artificial Intelligence implementation in schools. She has been an Equity Fellow for the Computer Science Teachers Association and a member of the EngageAI Practitioner Board.

Navigating Ethical AI: Empowering Educators with Tools, Frameworks, and Critical Perspectives

Photo of Ethical AI card deck with example prompts, images, and QR codes
Photo of CRAFT Ethical Engine card game designed by Marlon Matilla
by Marlon Matilla

The Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar serves as a microcosm of the broader challenges and opportunities that artificial intelligence (AI) presents in the educational landscape. The session brought together educators to explore the ethical implications of integrating AI into classrooms, highlighting the intersection between technological innovation and pedagogical responsibility.

The Ethical Imperative in AI Education

Central to the discussion was the need for educators to critically engage with AI, not just as a tool but as a complex system with far-reaching implications. Dr. Kip Glazer, principal at Mountain View High School, emphasized that understanding the technical distinctions between different types of AI, such as generative and supervised AI, is crucial for educators (see Ethical Use of AI – Privileging measured and deliberate thinking for further thoughts from Dr. Glazer). This technical literacy forms the foundation for ethical decision-making, as educators must navigate the biases inherent in AI systems and their potential impact on students and teaching practices. The dialogue in the session reflects a growing recognition that AI’s role in education is not neutral; it is laden with ethical considerations that educators must address proactively.

Practical Engagement with AI Ethics

Assistant professor Dr. Victoria Delaney introduced the Stanford Classroom-Ready Resources About AI for Teaching (CRAFT) project, which exemplifies how these ethical considerations can be translated into classroom practice. By developing adaptable AI literacy resources, the CRAFT initiative seeks to empower teachers to integrate AI education in a way that is both practical and responsive to the needs of diverse student populations. The project underscores the importance of flexibility and customization in educational resources, recognizing that teachers must be able to tailor AI lessons to their specific classroom contexts.

This approach is further exemplified by my CRAFT Ethical Engine card game, a tool I designed to foster critical thinking and ethical reasoning among students. This game moves beyond theoretical discussions, offering a hands-on way for students to grapple with the real-world implications of AI. Through scenarios like AI in law enforcement or AI-controlled military drones, the game prompts students to consider both the benefits and risks of AI technologies, thereby cultivating a more nuanced understanding of AI ethics.

Collective Responsibility and Advocacy

The session also highlighted the collective responsibility of educators to advocate for ethical AI practices. The Educator Bill of Rights, discussed by Dr. Kip Glazer, is a testament to this advocacy. It asserts the rights of educators to have a say in the AI tools introduced into their work environments and emphasizes the need for transparency and equity in AI implementation. This document not only empowers educators to protect their professional autonomy but also ensures that AI adoption in schools does not exacerbate existing inequalities or undermine educational goals.

The session’s exploration of these themes reflects a broader narrative within education: the need for a critical, reflective approach to technology. As AI becomes increasingly integrated into classrooms, educators are not just passive recipients of these tools; they are active participants in shaping how AI is used and understood in educational settings. This requires a deep engagement with the ethical dimensions of AI, as well as a commitment to advocating for practices that are fair, transparent, and aligned with educational values.

Engaging Educators in Discussion

The CRAFT Ethical Engine card game resource presented in the session and the Educator Bill of Rights can serve as starting points for bringing educators and students into conversations about ethical issues. As the presenters emphasized in this webinar, it is important to empower educators to think critically about how to safeguard against the ethical pitfalls that these technologies can produce and bring awareness to students about potential issues.

A Unified Perspective on AI in Education

Synthesizing the insights from the session reveals a unified perspective on the role of AI in education: It is a powerful tool that holds both promise and peril. The session participants collectively underscore that the successful integration of AI into education hinges on the ability of educators to critically assess and ethically navigate these technologies. Furthermore, our conversations with educators illustrate the necessity of an ethical framework for AI in education, one that is informed by a deep understanding of the technology and a commitment to equity and fairness. It is my hope that this synthesis of ideas and the resources shared can provide guidance for educators who are navigating the complex landscape of AI. Educators need more resources to ensure they are equipped to make informed, ethical decisions that benefit both their students and the broader educational community.


About the Author

Marlon Matilla is an educator dedicated to advancing data-driven and technology-focused learning in K-12 STEM education. Since 2015, he has taught mathematics, computer science, and cybersecurity with a strong emphasis on hands-on learning. As a CIRCLS Educator Fellow, he has contributed to AI education initiatives, including the co-design of ethical AI resources through Stanford’s CRAFT Fellowship. His recent publication, Optimizing Breakfast Choices: Leveraging Data Analytics in Packaged Foods for Informed Student Nutrition Decisions, supported by the University of Arkansas’ NSF-funded Data Analytics Teacher Alliance RET program, is published in the ASEE Professional Engineering Education Repository. Committed to merging research with practice, Marlon (aka Matt) aims to continue as a researcher-educator, fostering data literacy and ethical AI technology use in education.