Category Archives: AI in Education

Engaging Educators in Emerging Technology Research

book

Image by Tung Lam from Pixabay

by Cassandra Kelley, Sarina Saran, Deniz Sonmez Unal, and Erin Walker

This blog post discusses the outcomes of an Educator CIRCLS workshop that disseminated computer science education research findings to practitioners while prompting broader discussions of AI in classrooms.

This past summer and fall of 2024, Educator CIRCLS hosted a series of webinars, workshops, and convenings between researchers and practitioners focused on artificial intelligence (AI) literacy. Specifically, they were designed to engage participants in reflective conversations about ethics, equity, and other problems or possibilities of practice concerning the integration of AI (especially genAI) in PreK-12 education.

As part of this series, our team from the University of Pittsburgh piloted a novel strategy for research dissemination, in which we developed supplemental curricular resources or guided activities and shared them with educators in a workshop format. The goals behind these activities were twofold:

  • To facilitate discussion among educators about current research on the integration of emerging technologies that incorporate AI (e.g., robots and intelligent tutoring systems) and how they might impact the future of learning in education settings, and
  • To provide a mechanism for educators to think critically about ways to introduce elements of AI literacy to students via real world exercises that can simulate the work that researchers are doing (see Translating Research on Emerging Technologies for Educators for further background context about the design of this workshop).

During the planning stage of the workshop, we felt it was pertinent to get a better understanding of PreK-12 teachers’ experiences with professional learning for computer science (CS) education. We wanted to speak directly with them about the impact of these experiences on their practice and seek their recommendations for how these professional development programs are designed.

We interviewed 20 educators from 16 states, who taught across different grade levels and/or content areas. Most interviewees felt a disconnect with research dissemination as a form of professional learning and expressed their desire to better understand how emerging technologies connect with research-based practices and learning theories. They discussed how previous workshops they have attended either focused directly on the technology tools or on a mandated “turnkey curriculum” based on rote memorization and knowledge transfer (e.g., Advanced Placement CS course materials). Teachers expressed how they appreciated receiving curricular resources because such resources help them to stay current in this ever-evolving field. They would like to see fewer “direct instruction” lessons and more real-world approaches with project-based or problem-based learning (PBL) that promote inquiry—similar to what is expected in industry. They also emphasized the need for further collaborative opportunities to ideate on promoting digital/AI literacy through their instruction.

Following our conversations with teachers, we intentionally designed a workshop with guided activities, based on research projects on emerging technologies, that could expose practitioners to existing literature and findings while potentially seeding new ideas for curricula. Our workshop design incorporated the following structure: (1) outline the theoretical framework and CS concepts, (2) have participants experience different roles (e.g., student, educator, and researcher) within inquiry-based activities, (3) share project research findings, (4) discuss implications for practice and ways to address AI literacy, and (5) reflect on the overall format of the workshop and consider how to improve the design of future workshops.

We featured two research projects:

Our first session on teachable robots presented a research project that examined middle school students’ interactions with Nao robots in mathematics instruction. Participants were asked to think about the design and implementation challenges in building a robotic dialogue system for learning from the perspective of a student, an educator, and a researcher. They explored CS concepts related to Natural Language Processing (NLP) by: (1) determining keywords used in solving a math problem, (2) reviewing sample dialogue scripts and Artificial Intelligence Markup Language (AIML) that researchers used to program the Nao robot, and (3) interacting with prototype simulations created in Pandorabots that represented social and nonsocial versions of a chatbot. We also shared further extensions that could potentially be remixed or adapted for use with students, such as revising the dialogue by adding more social elements, writing a new script for solving a different math problem in AIML, developing a chatbot to test the code, or experimenting with a program such as Scratch to create a dialogue between two sprites.
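For readers curious what those dialogue scripts do under the hood, here is a minimal Python sketch of the keyword-based pattern matching that AIML scripts rely on. The keywords, replies, and the social/nonsocial toggle are hypothetical illustrations, not the project’s actual scripts.

```python
# A toy illustration (not the project's actual AIML) of keyword-based
# pattern matching in a dialogue script. Keywords and replies are made up
# for a simple area problem.

RULES = [
    # (keywords that must all appear in the student's utterance, robot reply)
    (["area", "rectangle"], "Right, we need the area. What should we multiply?"),
    (["length", "width"],   "Good. Multiply the length by the width. What do you get?"),
    (["stuck", "help"],     "No problem. Start by writing down what the problem gives you."),
]

SOCIAL_OPENER = "Nice thinking! "  # the "social" chatbot adds encouragement

def robot_reply(utterance: str, social: bool = False) -> str:
    """Return the reply for the first rule whose keywords all appear in the utterance."""
    text = utterance.lower()
    for keywords, reply in RULES:
        if all(word in text for word in keywords):
            # The social version prepends an encouraging phrase, mirroring
            # the social vs. nonsocial chatbot comparison in the workshop.
            return (SOCIAL_OPENER + reply) if social else reply
    return "Can you tell me more about what you're thinking?"

print(robot_reply("I think we need the area of the rectangle", social=True))
```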

Our second session on neuroimaging and educational data mining presented a research project that examined how students process information while interacting with intelligent tutoring systems. A major component of this study focused on the analysis of data collected by these systems to uncover patterns or trends that can inform and potentially improve teaching and learning practices. Additionally, neuroimaging brain data was collected as a proof of concept to explore how it might be analyzed to better understand how cognition, attention, and emotion affect learning (for further background on how this equipment works, see Neuroscience in Education). Similar to the first workshop, we presented guided activities to help participants think about the design of intelligent tutoring systems and the types of data collected; participants created their own data visualizations from sample datasets for analysis using the free educational software Common Online Data Analysis Platform (CODAP), and categorized example brain activation images based on the corresponding levels of task difficulty. Further extension activities were shared, such as outlining specific actions that an intelligent tutoring system might take to provide feedback (e.g., hints, prompting questions, or praise) in response to student behavior and debunking “neuro-myths” in education.
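To give a flavor of that analysis step outside of CODAP, here is a small Python sketch of exploring a tutoring-system log. The columns and values are made-up placeholders, not data from the featured study.

```python
# A minimal sketch of the kind of exploratory analysis participants did with
# sample tutoring-system logs. The data below is hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.DataFrame({
    "student":    ["s1", "s1", "s2", "s2", "s3", "s3"],
    "problem":    ["p1", "p2", "p1", "p2", "p1", "p2"],
    "hints_used": [0, 2, 1, 3, 0, 1],
    "correct":    [True, False, True, False, True, True],
})

# One pattern a tutoring system (or a teacher) might look for: do students
# who request more hints tend to answer that problem incorrectly?
summary = log.groupby("correct")["hints_used"].mean()
print(summary)

summary.plot(kind="bar", xlabel="Answered correctly", ylabel="Mean hints used")
plt.tight_layout()
plt.show()
```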

At the conclusion of each workshop, we asked educators their thoughts about the potential benefits and challenges of integrating these emerging technologies in PreK-12 classrooms and what they would like future research to explore. Our goal was to hear practitioner voices and gather input for researchers and developers to consider. This led to a focused discussion on the need to promote AI literacy in education, especially to address ethics and transparency.

Key takeaways from the experience are:

  • Teachers appreciate the opportunity to learn more about innovative research projects, but they especially like the idea of being in dialogue with researchers and potentially playing a role in the work that’s being done. Many volunteered to pilot future projects exploring the implementation of curricula and/or emerging technologies with their students if invited.
  • Teachers expressed that the content in our guided activities, while rigorous, enabled them to be more reflective. They were engaged with the hands-on simulations of the research and discussed how “active learning helped to promote deeper thinking.” As one participant mentioned, the activities allowed her to “think outside of the normal pedagogy box.”
  • Teachers had mixed feelings on the relevance of the workshop content and how to bring it into their schools or classrooms. Some thought it would be challenging to implement the activities with students due to external factors and other curricular mandates. As one participant stated, “one tension with cutting-edge research is that it’s difficult to be practical in the moment. I think you’re on the right track with scaling down the technology or bringing the insights to the classroom level…this [workshop] is way more effective than most formats, but I think you would have a difficult time getting educators to opt in.” Meanwhile another participant said, “in both workshops, the concepts and practice of the teachable bot and neuroimaging was beyond the ‘here and now’ of teaching and learning, but the examination of how our current concepts of pedagogy may change as we catch up to the technology.” Additionally, several teachers discussed how the workshop offered new ways for them to think about bringing in real-world data and student-led projects to promote further inquiry and AI literacy.
  • Teachers valued the opportunity to collaborate with other educators and researchers. They liked exploring different lenses (e.g., student, teacher, and researcher) while engaging in reflective discussions about the impact of research on their practice. One teacher highlighted how it felt like a “safe space to troubleshoot uses of AI and educational data mining” and another expressed appreciation for “garnering others’ experiences to get further ideas for their own classroom.”

Based on overall positive feedback from our teacher participants, we believe this research dissemination workshop model is worth exploring with other projects, especially since educators felt they were able to take something meaningful away from the experience. As one participant stated, “I feel very fortunate to be involved in this work. I’m very happy that your team is working to push the boundaries of how we learn and teach.” This gives us hope that researchers will consider the importance of collaborating and co-designing with educators. Additionally, this work validates the need for further mediation between research and practice, which potentially can include creating new roles for “knowledge brokers” (Levin, 2013) to promote further dialogue across these boundaries in order to truly make a broader impact.

Thank you to Sarah Hampton and Dr. Judi Fusco for their thinking and feedback on this post.

References:

Levin, B. (2013, February). To know is not enough: Research knowledge and its use. Review of Education, 1(1), 2-31. DOI: 10.1002/rev3.3001


About the Authors

Cassandra Kelley, Ed.D. has over fifteen years of experience in K-12 and teacher education. She earned her doctorate degree in Learning Technologies from Pepperdine University and is passionate about exploring new tools that can improve teaching and learning. She currently serves as a Broader Impacts Project Coordinator at the University of Pittsburgh and supports CIRCLS with Expertise Exchanges in the AI CIRCLS and Educator CIRCLS sub communities. Cassandra also teaches graduate courses for National University in the Master of Science in Designing Instructional and Educational Technology (MSDIET) Program.

Sarina Saran is a third-year undergraduate student at the University of Pittsburgh pursuing a B.S. in Computer Science and a B.A. in Media and Professional Communications. She is curious about the intersection of technology and communication, and she has been able to develop a greater understanding of the challenges in this area as an Undergraduate Research Assistant in the Office of Broader Impacts.

Deniz Sonmez Unal is a Ph.D. candidate in Intelligent Systems at the University of Pittsburgh. Her research focuses on modeling student cognitive states using multimodal data, including interaction logs, verbal protocol data, and neural signals to enhance the diagnostic capabilities of intelligent tutoring systems.

Erin Walker, Ph.D. is a co-PI of CIRCLS and a tenured Associate Professor at the University of Pittsburgh, with joint appointments in Computer Science and the Learning Research and Development Center. She uses interdisciplinary methods to improve the design and implementation of educational technology, and then to understand when and why it is effective. Her current focus is to examine how artificial intelligence techniques can be applied to support social human-human and human-agent learning interactions.

Translating Research on Emerging Technologies for Educators

Image by mcmurryjulie from Pixabay
by Cassandra Kelley

This blog post discusses the development of an Educator CIRCLS workshop aimed at “translating” or disseminating computer science education research findings to practitioners while promoting AI literacy.

Have you ever played the telephone game, where a sentence is whispered into someone’s ear and passed from person to person, until the final person reveals the message aloud to see how closely it aligns with what was originally said? I am frequently reminded of this childhood game in my role as the Broader Impacts Project Coordinator at the University of Pittsburgh and CIRCLS, where I think about how we can “translate” research into practice for practitioners; however, the game has become much more challenging due to the technical terminology, academic jargon (e.g., research methodologies), and other contextual phrases that are often included within the message being communicated. Moreover, all of the players have individual “language barriers” (e.g., prior knowledge, experience, expertise, etc.) that add another layer of difficulty to ensuring the mediated message is comprehensible to all.

My broader impacts position, inspired by the National Science Foundation (NSF) merit review criteria, was created as an avenue for broadly disseminating research on emerging technologies for teaching and learning—similar to programs such as Research Practice Partnerships (RPP) or Research Experiences for Teachers (RET). I was drawn to this opportunity because I feel it is critical not only for educators to learn about and understand education research, but also for researchers to consider the direct impact of their work on practice. I firmly believe in the importance of bridging the gap that currently exists between research and practice by promoting partnerships among all stakeholders, which can include further engagement in participatory research and involvement in co-design models.

As a former PreK-12 educator and higher education faculty member supporting pre-service teachers, my initial concern about this translation process was how students and teachers would benefit. From my own experience working in school systems, I have observed an institutional culture where research and policy are “thrown” at teachers through mandates and other recommendations. Educators’ voices are often missing from the conversation and there is not an immediate focus on how to best support their practices, which truly should center on the needs of students. I have also witnessed researchers temporarily engaging with educators for the purpose of conducting a study and then disappearing, which I personally know can feel like a one-sided transaction.

These factors led me to consider novel strategies for research dissemination that could potentially build stronger connections between researchers and practitioners. Specifically, I wanted to explore the development of supplemental curricular resources to be shared with teachers during a workshop so they could have opportunities to: (1) interact with computer science (CS) education concepts and understand their relationship within research findings, (2) experience the role of a learner and researcher, (3) engage in discussion with other educators and researchers about the impact of specific research projects on practice—especially with regard to the integration of emerging technologies, and ultimately to (4) bring elements of their professional learning back into the classroom via guided activities that could be adapted for implementation with students.

It is important to note that these goals were shaped through many discussions with practitioners, especially after having the opportunity to speak directly with 20 educators about their experiences as participants in professional development programs for CS education. I sought their recommendations for how we might design and structure a workshop to disseminate research findings via our supplemental curricular resources (see Engaging Educators in Emerging Technology Research for further details about the facilitation of this workshop). Throughout these reflective conversations, it was frequently mentioned how most programs tend to be “technocentric” and focus more on “new shiny technology tools” rather than pedagogy for classroom integration or research-based practices and learning theories. Educators advocated for further rigor and inquiry-based activities that immerse them into the research literature, paired with opportunities for collaboration and the exchange of ideas or curricular resources; each of these elements would be intentionally incorporated into our workshop design.

Additionally, I connected with members of interdisciplinary research teams to better understand their different areas of expertise and the methodologies used across projects. I had to consider the application of CS terminology and concepts within each project (many of which were new to me) and pinpoint the key areas to focus on in the translation. Fortunately, I was introduced to a new undergraduate student in the lab who was double majoring in both CS and communication. She was eager to help and became a translator for me when I wore my “learner hat,” similar to how a teacher’s assistant or tutor might provide direct instruction to dive deeper into the content. Likewise, I would then put on my “teacher hat” and explain pedagogical concepts (e.g., scaffolding, asking different levels of questions, Universal Design for Learning (UDL) strategies, etc.) or learning theories (e.g., constructivism, sociocultural learning, project-based learning, etc.) while we discussed how we could take research findings and use them to develop supplemental curricula or guided activities for dissemination to educators.

A final consideration in the development of these guided activities was how to simulate the research procedures in an immersive way without the technology equipment. This was necessary because we wanted to acknowledge potential constraints of implementation in schools, such as access issues and the need for further technical support or training—not to mention how expensive these emerging technologies are. Therefore, we engaged in further conversations with the research teams about how we might develop user-friendly prototypes of simulations that educators could interact with on their own devices. Our discussions reminded us that there may be further barriers to research dissemination in the traditional schooling environment including challenges with existing curricula requirements and/or scheduling constraints. For this reason, we decided it might make more sense to frame our activities as supplemental or enrichment materials that can be adapted/remixed across a variety of settings (e.g., after school programs or summer camps).

In summary, these convenings with researchers and practitioners across what Wenger-Trayner and colleagues (2014) refer to as the “boundaries in landscapes of practice” helped us consider the institutional culture bounding each landscape. We found it extremely valuable learning from multiple perspectives and using these insights to help us identify existing boundaries and ways to collectively navigate them.

Key takeaways from the experience are:

  • Acknowledge the systemic barriers with regard to education policy and practices in different community settings.
  • Engage in learning partnerships by collaboratively negotiating and exploring the existing boundaries. This includes actively listening to all voices (e.g., researchers and practitioners) from different landscapes (e.g., PreK-12 education, higher education, and industry) to create a two-way dialogue of mutual reflection.
  • Focus less on the technology and more on the diffusion of innovative ideas as well as the AI literacy needed by ALL stakeholders for advancement of these ideas.
  • Develop immersive guided activities that promote further conversations about AI literacy while being grounded in research and learning theories. Be sure to clearly communicate these connections when translating back-and-forth and offer opportunities for reflective discussion.
  • Seek feedback at every stage of the iterative process and prioritize the community partnerships across the landscapes of practice above all.
  • Remember that the ultimate shared goal or vision is to positively impact the future of learning for students.

Since I personally identify as both a researcher and practitioner, I have learned firsthand the importance of negotiating my own experiences to build a bridge between my understanding of the teaching practice and students’ needs, while also thinking critically about advancing the field of education research. In order to bring these landscapes together, researchers must consider ways to make their work more accessible so they can get the necessary buy-in from teachers that will propel institutional change and innovation in the future of schooling and education. Likewise, educators need to keep seeking opportunities to stay abreast of current research findings, especially to help lead this exploration of new pedagogical practices or emerging technologies that can support teaching and learning. One avenue to achieve this is establishing sustained partnerships between researchers and practitioners through co-design or participatory research. Moreover, the incorporation of “intermediaries” or “knowledge brokers,” which Levin (2013) defines as “people or organizations that translate or transmit research,” similar to my position as a Broader Impacts Project Coordinator, can “play a critical role in the process of diffusing ideas and practices in education” (p. 21). It is my hope that researchers will take into account how emerging scholars, such as school administrators, academic coaches, a subgroup of PreK-12 teachers, post-doctoral students, or graduate students, might be leveraged to help spearhead this essential translation of research into practice.

Thank you to Sarina Saran, Deniz Sonmez Unal, Sarah Hampton, Dr. Erin Walker, and Dr. Judi Fusco for their thinking and feedback on this post.

References:

Levin, B. (2013, February). To know is not enough: Research knowledge and its use. Review of Education, 1(1), 2-31. DOI: 10.1002/rev3.3001

Wenger-Trayner, E., Fenton-O’Creevy, M., Hutchinson, S., Kubiak, C., & Wenger-Trayner, B. (Eds.). (2014). Learning in landscapes of practice: Boundaries, identity, and knowledgeability in practice-based learning. Routledge.


About the Author

Cassandra Kelley, Ed.D. has over fifteen years of experience in PreK-12 and teacher education. She earned her doctorate degree in Learning Technologies from Pepperdine University and is passionate about exploring new tools that can improve teaching and learning. She currently serves as a Broader Impacts Project Coordinator at the University of Pittsburgh and supports CIRCLS with Expertise Exchanges in the AI CIRCLS and Educator CIRCLS sub communities. Cassandra also teaches graduate courses for National University in the Master of Science in Designing Instructional and Educational Technology (MSDIET) Program.

The Convenience of Inconvenience: Exploring Connections with Researchers and Practitioners

AI letters surrounded by electronic cords
Image by Steve Johnson on Unsplash
by Nneka McGee and Krystal Chatman

The goals of change are aspirational. In the artificial intelligence (AI) era, we see change as a catalyst that will bring about the true transformation in education many of us have sought for decades. We all agree that change is necessary, right? Therefore, it’s only a matter of time before aspirations become concrete. Though we should continue to strive for the best ideas, there is one reality we must face: The difficulty of change is an inconvenient truth.

While change can be a challenge, particularly in education, hope is still a constant. Artificial intelligence and other emerging technologies have potential, but what are optimal ways to incorporate them into learning environments? Researchers have long explored the questions and possible answers of improving educational outcomes. Practitioners apply real-world experiences that significantly impact learners. The premise of bringing these disparate groups together offers a promising solution to understanding change.

In summer and fall of 2024, the Center for Integrative Research in Computing and Learning Sciences (CIRCLS) facilitated a convening of researchers and practitioners to explore AI problems of practice in PreK-12 education (for further details, see Educator CIRCLS ’24 Summer Series). This was an innovative act in itself, as it is not often that the explorers of educational theory (researchers) are given the opportunity to collaborate with the facilitators (teachers) of the suggested strategies. This convening offered an opportunity for both sides to engage at a deeper level, gaining insight into how the “other side” operates to improve their practice. Through discussions, researchers had the opportunity to convey how valuable the experiences of practitioners are to the work of advancing innovation in education. Practitioners shared their interest in research areas such as generative AI and expressed their concerns about helping students navigate the future of learning.

Problems of Practice

Dialogue was a primary focus of the convening. To bridge the distance between practice and research, CIRCLS facilitators divided attendees into groups with the goal of centering conversations on what practitioners experience within their educational ecosystems. Through in-depth discussions, researchers and practitioners identified areas of focus by posing questions such as:

  • Do we need multiple literacies?
  • How do we address accessibility issues?
  • How do we use AI tools and what are the benefits?
  • How do we include others across the spectrum to include more voices in developing AI tools?
  • How can we use AI to support educator coaching?

AI literacy was a common thread among the groups. There was a collective acknowledgement of the limitations surrounding this emerging technology given the current state of training at educational institutions. Without the requisite foundations in how to use AI effectively, its potential to transform education is unclear. Additional conversations explored the limitations practitioners face in utilizing or implementing AI tools due to policies, and how use of AI in learning spaces cannot swiftly progress until practitioner decision-makers gain more knowledge and confidence in AI use. Although researchers may be working on other areas, the conversations provided essential insights into the needs and wonderings of practitioners.

Research to Action

During a second round of grouping, researchers also sought input on their research topics and potential impact in today’s classrooms. Recruiting research subjects has always been a challenge; therefore, informing practitioners of research topics opened avenues for feedback and questions related to applicability and feasibility. Several topics emerged from the group, including:

  • Collaborative learning
  • Student voice
  • Systemic challenges
  • AI scoring
  • Policy development
  • Navigating tensions
  • Centering pedagogy

Just as in the prior group discussions, AI literacy was a popular talking point, often interwoven within the topics listed above.

When groups reconvened, all agreed that more conversation was necessary. More action was also on the minds of researchers and practitioners, but the realities of change were ever present. Several participants expressed concern about the trajectory of AI in education, particularly when considering access, bias, safety, security, and environmental impact. Despite misgivings, there was an overall sense of optimism about the future of research and the contributions of practitioners in propelling use of emerging technologies forward.


About the Authors

Nneka J. McGee, J.D., Ed.D. is an educational consultant and former Chief Academic Officer dedicated to ensuring students have access and opportunities to explore AI and emerging technologies. She completed her doctoral studies on AI and its implementation in K-12 classrooms. She is a frequent speaker on AI topics and serves as a member of several AI-focused advisory boards and committees.

Krystal Chatman is a seasoned educator and advocate with over 15 years of experience in integrating technology into teaching and promoting equitable computer science education. Currently serving as the Professional Learning & Community Manager for the Computer Science Teachers Association, she is also a co-founder of the Mississippi AI Collaborative, focused on AI upskilling across the Southern U.S. A dedicated leader in broadening participation in computer science and STEM, Krystal has facilitated panels on AI at the White House, spoken at the Brookings Institution, and developed Mississippi’s first AI conference.

Ethical Use of AI – Privileging measured and deliberate thinking

Stack of books with Thinking fast and slow book by Daniel Kahneman in front
Photo by Ahmed Almakhzanji on Unsplash
by Kip Glazer

As a school leader and educator, I am deeply committed to promoting the ethical use of artificial intelligence (AI), a responsibility that we all share as AI-embedded tools become increasingly prevalent in our school systems. I strongly advocate using interactive methods, such as leveraging games to engage learners and educators in these crucial discussions (featured in this blog post by Marlon Matilla). I believe that the active participation of both researchers and practitioners is essential, and I am proud to have contributed to this vital discussion by being a part of the Educator CIRCLS community over the years.

As I think about the ethical use of AI, I can’t help but think about the design of the AI system and how it nudges users to behave. In his book Thinking, Fast and Slow, Daniel Kahneman argues that humans have System 1 thinking that reacts quickly and instinctively, while System 2 thinking reacts more slowly and deliberately. Unfortunately, our System 1 thinking (aka impulses) often overrides our System 2 thinking when making decisions, and many companies have used this to maximize profit at the expense of consumers. As much as technology companies tout the usefulness of AI systems, I am concerned that the rapid spread of AI is amplifying the functions of System 1 thinking at the expense of System 2 thinking. Because AI prioritizes speed and volume (similar to System 1 thinking) over quality based on careful deliberation (which is the hallmark of System 2 thinking), I am concerned that we humans will not be able to avoid the temptation of choosing the quickest answers. If you believe that is not likely, I would encourage you to consider reading Nudge: The Final Edition by Thaler and Sunstein. Thaler and Sunstein argue that choice architecture influences human behaviors. They cite Kahneman’s work extensively to clarify that even the slightest nudge easily influences all humans and can have a significant impact.

Undoubtedly, we have made significant strides in acknowledging and discussing the potential harm of AI. Our Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar is a testament to how the learning sciences field is actively responding to the growing concerns about the ethical use of AI. It’s important to note that the dangers and harm of AI often do not stem from any malicious intent by its creators. Instead, they result from our automatic thinking, reinforced by Generative AI’s (GenAI) speed. However, our increased understanding and awareness can help us navigate these challenges and somewhat mitigate potential harm.

Still, I can’t help but be concerned about the proliferation of GenAI as it seems to automate and propagate products that replicate the basest human instincts. Think about the deep fakes and their ability to go viral! Think about the students attempting to shortcut their learning. Think about the educators using GenAI tools to grade all their student work to save time. Such routine behaviors are not what we typically consider to be unethical AI use. It is our automatic thinking that allows us to accept the results generated by a bias-ridden system. Even if we are aware of the potential harm of blindly trusting GenAI’s outcomes, it will be difficult to eschew such influences based on simple nudges by our general environment. Most importantly, I am concerned that classroom teachers without basic AI literacy won’t be able to guide their students in this new environment to recognize the potential harm done by GenAI because they can’t compete against System 1 thinking.

To activate System 2 thinking safely and productively, teachers need support from knowledgeable leaders who can lead them. Because we recognized the importance of school leaders in implementing System 2 thinking, Sofía De Jesús and I wrote Framework for AI Implementation for Administrators and Leaders in K-12 Schools when we participated in the Computer Science Teachers Association’s Equity Fellowship Program. We wanted to make it easier and more manageable for the many school leaders who feel overwhelmed by the number of currently available AI-enabled tools and the feverish sales pitches encouraging them not to leave their teachers and students behind! With the framework, we hope that they can access their System 2 thinking as they consider purchasing and implementing AI-enabled tools.

Educators need to stay vigilant against the urge to choose automation and efficiency over ethical and deliberate practice as more AI-embedded tools are introduced into our school ecosystems. Now more than ever, we must activate our System 2 thinking to ensure we are modeling and instilling equitable values with AI in education.

References:

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Thaler, R. H., & Sunstein, C. R. (2021). Nudge: The final edition. Yale University Press.


About the Author

Kip Glazer, Ed.D. is the proud principal of Mountain View High School. A long-time practitioner participant for CIRCLS and Educator CIRCLS, she has been sharing her school leader’s perspective on various learning technologies topics, including Artificial Intelligence implementation in schools. She has been an Equity Fellow for the Computer Science Teachers Association and a member of the EngageAI Practitioner Board.

Navigating Ethical AI: Empowering Educators with Tools, Frameworks, and Critical Perspectives

Photo of Ethical AI card deck with example prompts, images, and QR codes
Photo of CRAFT Ethical Engine card game designed by Marlon Matilla
by Marlon Matilla

The Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar serves as a microcosm of the broader challenges and opportunities that artificial intelligence (AI) presents in the educational landscape. The session brought together educators to explore the ethical implications of integrating AI into classrooms, highlighting the intersection between technological innovation and pedagogical responsibility.

The Ethical Imperative in AI Education

Central to the discussion was the need for educators to critically engage with AI, not just as a tool but as a complex system with far-reaching implications. Dr. Kip Glazer, principal at Mountain View High School, emphasized that understanding the technical distinctions between different types of AI—such as generative and supervised AI—is crucial for educators (see Ethical Use of AI – Privileging measured and deliberate thinking, for further thoughts from Dr. Glazer). This technical literacy forms the foundation for ethical decision-making, as educators must navigate the biases inherent in AI systems and their potential impact on students and teaching practices. The dialogue in the session reflects a growing recognition that AI’s role in education is not neutral; it is laden with ethical considerations that educators must address proactively.

Practical Engagement with AI Ethics

Assistant professor Dr. Victoria Delaney introduced the Stanford Classroom-Ready Resources About AI for Teaching (CRAFT) project, which exemplifies how these ethical considerations can be translated into classroom practice. By developing adaptable AI literacy resources, the CRAFT initiative seeks to empower teachers to integrate AI education in a way that is both practical and responsive to the needs of diverse student populations. The project underscores the importance of flexibility and customization in educational resources, recognizing that teachers must be able to tailor AI lessons to their specific classroom contexts.

This approach is further exemplified by my CRAFT Ethical Engine card game, a tool I designed to foster critical thinking and ethical reasoning among students. This game moves beyond theoretical discussions, offering a hands-on way for students to grapple with the real-world implications of AI. Through scenarios like AI in law enforcement or AI-controlled military drones, the game prompts students to consider both the benefits and risks of AI technologies, thereby cultivating a more nuanced understanding of AI ethics.

Collective Responsibility and Advocacy

The session also highlighted the collective responsibility of educators to advocate for ethical AI practices. The Educator Bill of Rights, discussed by Dr. Kip Glazer, is a testament to this advocacy. It asserts the rights of educators to have a say in the AI tools introduced into their work environments and emphasizes the need for transparency and equity in AI implementation. This document not only empowers educators to protect their professional autonomy but also ensures that AI adoption in schools does not exacerbate existing inequalities or undermine educational goals.

The session’s exploration of these themes reflects a broader narrative within education: the need for a critical, reflective approach to technology. As AI becomes increasingly integrated into classrooms, educators are not just passive recipients of these tools; they are active participants in shaping how AI is used and understood in educational settings. This requires a deep engagement with the ethical dimensions of AI, as well as a commitment to advocating for practices that are fair, transparent, and aligned with educational values.

Engaging Educators in Discussion

The CRAFT Ethical Engine card game resource presented in the session and the Educator Bill of Rights can serve as starting points for bringing educators and students into conversations about ethical issues. As the presenters emphasized in this webinar, it is important to empower educators to think critically about how to safeguard against the ethical pitfalls that these technologies can produce and bring awareness to students about potential issues.

A Unified Perspective on AI in Education

Synthesizing the insights from the session reveals a unified perspective on the role of AI in education: It is a powerful tool that holds both promise and peril. The session participants collectively underscore that the successful integration of AI into education hinges on the ability of educators to critically assess and ethically navigate these technologies. Furthermore, our conversations with educators illustrate the necessity of an ethical framework for AI in education, one that is informed by a deep understanding of the technology and a commitment to equity and fairness. It is my hope that this synthesis of ideas and the resources shared can provide guidance for educators who are navigating the complex landscape of AI. Educators need more resources to ensure they are equipped to make informed, ethical decisions that benefit both their students and the broader educational community.


About the Author

Marlon Matilla is an educator dedicated to advancing data-driven and technology-focused learning in K-12 STEM education. Since 2015, he has taught mathematics, computer science, and cybersecurity with a strong emphasis on hands-on learning. As a CIRCLS Educator Fellow, he has contributed to AI education initiatives, including the co-design of ethical AI resources through Stanford’s CRAFT Fellowship. His recent publication, Optimizing Breakfast Choices: Leveraging Data Analytics in Packaged Foods for Informed Student Nutrition Decisions, supported by the University of Arkansas’ NSF-funded Data Analytics Teacher Alliance RET program, is published in the ASEE Professional Engineering Education Repository. Committed to merging research with practice, Marlon (aka Matt) aims to continue as a researcher-educator, fostering data literacy and ethical AI technology use in education.

CRAFTing Better Learning Experiences: Infusing GenAI in Education Effectively and Ethically

CRAFT Framework in table format
CRAFT Framework by Andrew Fenstermaker, Drew Olsson and Sarah Hampton
by Andrew Fenstermaker, Drew Olsson, and Sarah Hampton

Introduction

Generative artificial intelligence (GenAI) stands to be a disruptive technology in education and all facets of our daily life. While this technology offers significant advantages for teaching and learning, it can hinder the process when used without a full understanding of how the technology works and how to evaluate the content it generates. The educator must remain the expert, advocate, arbiter, and human in the loop who identifies why and when the technology gets used, and the critical evaluator who upholds the best of our human ideals.

CRAFT Framework Overview

Using GenAI to augment the lesson design process can seem overwhelming. From composing and revising prompts to evaluating the outputs, integrating GenAI requires a new set of literacy skills. CRAFT was collaboratively designed by Andrew Fenstermaker, Drew Olsson, and Sarah Hampton and augmented using GenAI. The framework serves as a step-by-step roadmap that scaffolds the process of infusing GenAI with the learning sciences to improve learning experiences ethically.

1. Create
The first step in the framework is to create a lesson plan using GenAI prompting that is grounded in learning sciences based on a specific standard, age group or grade level, and time frame. Often, we start with a basic prompt providing no persona or context for GenAI to use in its algorithm as it generates the output. As you can see in this example of generating a lesson plan on the main idea, the chatbot makes inferences about the grade level being taught, length of time, and materials available. The chatbot is simply following its algorithm to predict the next word in its sequence of constructing a complete lesson plan.

Giving a chatbot a persona, such as an expert teacher, and providing more details up front can enhance its recommendations. Reviewing the output from the Detailed Prompt example, you will see that the targeted grade level, specific standard, and length of time are now tailored to our prompt. We can improve the outputs further by including a request for evidence-based best education practices from learning sciences research up front. Therefore, the goal of the create step in the CRAFT framework is to underpin a detailed prompt with learning sciences.

Comparison table of basic prompts, detailed prompts, and detailed + learning sciences + chain of thought prompts

In addition, the results from the prompt with learning sciences can be optimized using chain-of-thought prompting. Chain-of-thought prompting is a technique that allows us to take a complex task and decompose it into a series of logical steps to increase the performance of the large language model (chatbot). As you explore the Detailed Prompt + learning sciences + Chain-of-Thought prompts, you will see that a series of 11 chain-of-thought prompts were used to continually refine the output. Decomposition methods were used as we broke our complex task into smaller chunks, from identifying effect size and selecting an evidence-based learning strategy to incorporating elements aligned with student interests and crafting sentence starters to scaffold the process for students.
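As a rough illustration of this decomposition, the sketch below shows how a sequence of refinement prompts might be sent to a chatbot one step at a time. The ask() function is a stand-in for whatever chatbot interface or API you use, and the prompts are simplified examples rather than the exact chain used in the CRAFT materials.

```python
# A sketch of chain-of-thought refinement as a sequence of follow-up prompts.
# ask() is a placeholder for any chat interface that takes a message history
# and returns the model's reply; the prompts are illustrative only.
from typing import Callable

def refine_lesson(ask: Callable[[list[dict]], str]) -> str:
    """Send an initial detailed prompt, then refine the output step by step."""
    messages = [{
        "role": "user",
        "content": ("Act as an expert 5th grade teacher. Draft a 45-minute "
                    "lesson on finding the main idea, grounded in "
                    "evidence-based strategies from the learning sciences."),
    }]
    refinements = [
        "Which of these strategies has the strongest evidence (effect size)?",
        "Revise the lesson to center that strategy.",
        "Add elements that connect to typical 5th grade student interests.",
        "Add sentence starters that scaffold student discussion.",
    ]
    reply = ask(messages)
    for step in refinements:
        messages += [{"role": "assistant", "content": reply},
                     {"role": "user", "content": step}]
        reply = ask(messages)  # each step refines the previous output
    return reply
```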

2. Review
Advancements in GenAI capabilities and detailed prompting strategies may lull the user into “falling asleep at the wheel,” that is, over-relying on outputs without intentionally refining them for a given context. The CARE framework is designed to critically evaluate each output for Clarity, Accuracy, Relevance, and Ethics.

  • Clarity– A lack of clarity in GenAI output might mean the output is not appropriate for a given audience such as overly wordy sentences for a 5th grade class.
  • Accuracy– It is well known that AI makes things up, from citing studies that never existed to stating facts that are blatantly false. It is up to the user to fact check GenAI outputs so that misinformation is not shared in a classroom setting.
  • Relevance– Often, an AI’s first output will not generate a response that matches the intent of the input given. Continue refining your prompts until the output fits your needs.
  • Ethics– GenAI is trained on biased data and can be racist. Inspecting outputs for possible stereotypical depictions or biased results is imperative to uphold integrity and respect for all.

3. Amplify
Once we have reviewed our lesson plan grounded in learning sciences using the CARE framework, we are ready for the Amplify stage of CRAFT. The recent National Ed Tech Plan highlights three different divides with instructional technology: Access, Design and Use. The COVID-19 pandemic sparked a rapid influx of instructional technology, decreasing the access divide but exposing prominent divides in how teachers were designing digital learning as well as how students were using the digital tools and resources.

The amplify stage fosters opportunities for teachers to work towards minimizing the Design and Use divides by using GenAI prompts centered around the SAMR (substitution, augmentation, modification, redefinition) model. The graphic below illustrates the progression from beginning with a standard, using chain-of-thought prompting grounded in learning sciences, reviewing with the CARE framework, and concluding with instructional technology enhancements through the SAMR model.

Flow chart demonstrating how a standard can be used to prompt AI with both the learning sciences and SAMR framework for lesson plan design

As outlined before, we continue to use chain-of-thought prompting to further refine our outputs. Within our amplified SAMR lesson plan, you can see that the additional prompts help refocus the large language model (chatbot) when the algorithm deviates from our original vision and allow us to select the specific level(s) of SAMR we wish to incorporate within our final lesson. Just as we did before, we must leverage the review stage to critically evaluate the outputs using the CARE framework.

4. Fine-tune
Leveraging the fine-tune stage allows us to ensure that our lesson is both effective and equitable by carefully evaluating the integration of instructional technology. This includes considering the context of use, implementation strategies, sustainability factors, and inclusivity for all learners. By focusing on these key areas, we can enhance the technology’s impact and ensure it aligns with our pedagogical goals.

Graphical representation of technology considerations in the fine-tune stage including context, implementation, sustainability, and inclusivity.

5. Transform
The final stage of the CRAFT framework does not require additional prompting or reviewing of outputs. Instead, by implementing the filtered enhancements, teachers transform the learning experience, making it more engaging, interactive, and effective for their students.

Conclusion and Call to Action

As GenAI continues to evolve, it is essential for educators to remain the experts in their classrooms and use technology as a tool to enhance, not dictate, their practices. We encourage you to employ the CRAFT Framework to center students in the learning design process. It is through the intentional underpinning of learning sciences that we can remove barriers to create rich learning experiences for all students.

Resources
Link to PDF of CRAFT (two pager)


About the Authors

Andrew Fenstermaker is the Instructional Technology Coordinator for the Iowa City School District. A perpetual learner who infused emerging technologies into his own classroom for ten years, he now works to empower educators through dynamic professional development, one-on-one coaching, and innovative lesson design that centers students and removes barriers to success. He is a Google Certified Coach and Innovator, leading efforts locally and nationally on adopting and scaling computational thinking and AI in education while sharing key deliverables through presentations and publications.

Drew Olsson is the Technology Integration Coordinator for the Agua Fria High School District. He is an advocate for staff and student AI integration, mindful EdTech implementation, and building tech literacy for all. He taught math and computer science for 9 years before moving into his current role, where he serves 5 comprehensive high schools and over 10,000 students. He is invested in providing powerful learning opportunities for all students so that they may thrive in an increasingly techno-centric world. Drew holds Master’s Degrees in Secondary Education and Educational Leadership from Arizona State University.

Sarah Hampton is a Technology and Curriculum Specialist for the Greenbrier County School District specializing in secondary math education. Prior to her current role, she brought passion for evidence-based instructional strategies and thoughtful technology integration to her middle and high school math and science classrooms. A veteran educator of 15+ years, Sarah works to bring the benefits of education research to more students through embedded professional development in her district and through collaboration with researchers and educators at the Center for Integrative Research in Computing and Learning Sciences.

Glossary of Artificial Intelligence Terms for Educators

A blue robot-looking windup toy with a metal coil across its head and red feet
Photo by Rock’n Roll Monkey on Unsplash
by Pati Ruiz and Judi Fusco

This glossary was written for educators to reference when learning about and using artificial intelligence (AI). We will start with a definition of artificial intelligence and then provide definitions of AI-related terms in alphabetical order. This glossary was last updated on March 31, 2024.

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” to do things like make decisions, discover patterns, and perform some sort of action. AI is a general term and there are more specific terms used in the field of AI. AI systems can be built in different ways, two of the primary ways are: (1) through the use of rules provided by a human (rule-based systems); or (2) with machine learning algorithms. Many newer AI systems use machine learning (see definition of machine learning below).

________________________________________________
Algorithm: Algorithms are the “brains” of an AI system and what determines decisions; in other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine learning for more) or be rule-based, where human programmers give the rules.

Artificial General Intelligence (AGI): Artificial general intelligence has not yet been realized; it would be an AI system that can learn, understand, and solve any problem that a human can.

Artificial Narrow Intelligence (ANI): AI can solve narrow problems and this is called artificial narrow intelligence. For example, a smartphone can use facial recognition to identify photos of an individual in the Photos app, but that same system cannot identify sounds.

Generative AI (GenAI): A type of machine learning that generates content, currently including text, images, music, and video, and can create 3D models from 2D input. See the ChatGPT definition; ChatGPT is a specific example of GenAI.

Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see definitions for neural networks and Natural Language Processing below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).

Transformer models: Used in GenAI (the T stands for Transformer), transformer models are a type of language model. They are neural networks and also classified as deep learning models. They give AI systems the ability to determine and focus on important parts of the input and output using something called a self-attention mechanism to help.

Self-attention mechanism: These mechanisms, also referred to as attention, help systems determine the important aspects of input in different ways. There are several types, and they were inspired by how humans can direct their attention to important features in the world, understand ambiguity, and encode information.
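For readers who want to see the arithmetic behind this idea, here is a bare-bones NumPy sketch of scaled dot-product self-attention, the core computation inside transformer models. Real systems learn the query, key, and value projections during training; here they are random placeholders.

```python
# A minimal NumPy sketch of scaled dot-product self-attention.
# Real transformers learn W_q, W_k, W_v; here they are random for illustration.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x has shape (sequence_length, d). Returns attention-weighted values."""
    d = x.shape[-1]
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(d)  # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V

tokens = np.random.default_rng(1).standard_normal((4, 8))  # 4 tokens, 8 dimensions
print(self_attention(tokens).shape)  # (4, 8)
```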

Large language models (LLMs): Large language models form the foundation for generative AI (GenAI) systems. GenAI systems include some chatbots and tools, including OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, an LLM detects statistical relationships between how likely a word is to appear following the previous word in its training data. As they answer questions or write text, LLMs use this model of the likelihood of a word occurring to predict the next word to generate. LLMs are a type of foundation model, which are pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include data sets of text without the creators’ consent.
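The next-word idea can be illustrated with a toy example. The sketch below counts which words follow which in a tiny made-up text and predicts the most likely next word; real LLMs use neural networks trained on vastly more data, but the underlying intuition of predicting the next word is the same.

```python
# A toy illustration of next-word prediction from word-following counts.
# This is not how real LLMs are built, but it shows the core intuition.
from collections import Counter, defaultdict

training_text = "the robot asks a question the student answers the question".split()

following = defaultdict(Counter)
for word, next_word in zip(training_text, training_text[1:]):
    following[word][next_word] += 1  # count which words follow which

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in the toy training text."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # most frequent word following "the" in the toy corpus
```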

Computer Vision: Computer Vision is a set of computational challenges concerned with teaching computers how to understand visual information, including objects, pictures, scenes, and movement (including video). Computer Vision (often thought of as an AI problem) uses techniques like machine learning to achieve this goal.

Critical AI: Critical AI is an approach to examining AI from a perspective that focuses on reflective assessment and critique as a way of understanding and challenging existing and historical structures within AI. Read more about critical AI.

Data: Data are units of information about people or objects that can be used by AI technologies.

Training Data: This is the data used to train the algorithm or machine learning model. It has been generated by humans in their work or other contexts in their past. While it sounds simple, training data is so important because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased” but this can be a dangerous perspective. Machines are only as unbiased as the human who creates them and the data that trains them. (Note: we all have biases! Also, our data reflect the biases in the world.)

Foundation Models: Foundation Models represent a large amount of data that can be used as a foundation for developing other models. For example, generative AI systems use large language foundation models. They can be a way to speed up the development of new systems, but there is controversy about using foundation models since, depending on where their data comes from, there are different issues of trustworthiness and bias. Jitendra Malik, Professor of Computer Science at UC Berkeley, once said the following about foundation models: “These models are really castles in the air; they have no foundation whatsoever.”

Human-centered Perspective: A human-centered perspective sees AI systems working with humans and helping to augment human skills. People should always play a leading role in education, and AI systems should not replace teachers.

Intelligence Augmentation (IA): Augmenting means making something greater; in some cases, perhaps it means making it possible to do the same task with less effort. It might mean letting a human (perhaps a teacher) choose not to do all of the repetitive tasks in a classroom but to automate some of them, so they can do more of the things that only a human can do. It may mean other things. There’s a fine line between augmenting and replacing, and technologies should be designed so that humans can choose what a system does and when it does it.

Intelligent Tutoring Systems (ITS): A computer system or digital learning environment that gives instant and custom feedback to students. An Intelligent Tutoring System may use rule-based AI (rules provided by a human) or use machine learning under the hood. By under the hood we mean the underlying algorithms and code that an ITS is built with. ITSs can support adaptive learning.

Adaptive Learning: Subject or course material is adjusted based on the performance of the learner. The difficulty of material, the pacing, sequence, type of help given, or other features can be adapted based on the learner’s prior responses.
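As a hypothetical illustration of adaptivity, the short Python sketch below adjusts the difficulty of the next item based on a learner’s recent answers. The rule and the numbers are invented for this example; real adaptive systems typically use much richer models of a learner’s knowledge over time.

```python
# A hypothetical, rule-based sketch of adaptivity: step difficulty up after
# mostly-correct work and down after mostly-incorrect work.
def next_difficulty(current_difficulty, recent_correct, max_level=5):
    """recent_correct: list of booleans for the learner's last few answers."""
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:
        return min(current_difficulty + 1, max_level)
    if accuracy <= 0.4:
        return max(current_difficulty - 1, 1)
    return current_difficulty

print(next_difficulty(3, [True, True, True, False, True]))  # 4: learner is ready for harder items
```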

Interpretable Machine Learning (IML): Interpretable machine learning, sometimes also called interpretable AI, describes the creation of models that are inherently interpretable in that they provide their own explanations for their decisions. This approach is preferable to that of explainable machine learning (see definition below) for many reasons including the fact that we should understand what is happening from the beginning in our systems, rather than try to “explain” black boxes after the fact.

Black Boxes: We call things we don’t understand “black boxes” because what happens inside the box cannot be seen. Many machine learning algorithms are “black boxes,” meaning that we don’t understand how a system is using features of the data when making its decisions (generally, we do know what features are used, but not how they are used). There are currently two primary ways to pull back the curtain on the black boxes of AI algorithms: interpretable machine learning (see definition above) and explainable machine learning (see definition below).

Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.

It’s important to note that in machine learning, the algorithm is doing the work to improve and does not have the help of a human programmer. It is also important to note three more things. One, in most cases the algorithm is learning an association (when X occurs, it usually means Y) from training data that is from the past. Two, since the data is historical, it may contain biases and assumptions that we do not want to perpetuate. Three, there are many questions about involving humans in the loop with AI systems; when using ML to solve AI problems, a human may not be able to understand the rules the algorithm is creating and using to make decisions. This could be especially problematic if a human learner was harmed by a decision a machine made and there was no way to appeal the decision.
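For readers who want to see what “finding rules in the data” can look like in practice, here is a minimal sketch using the scikit-learn library. The training data are made up for this example, and real systems train on far larger data sets.

```python
# A minimal machine learning sketch with scikit-learn: the decision tree finds
# its own rules in the (tiny, made-up) training data rather than being given
# rules by a programmer.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [hours_studied, prior_quiz_score] -> passed (1) or not (0)
X_train = [[1, 50], [2, 60], [8, 85], [7, 90], [3, 55], [9, 95]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[6, 80]]))  # the model applies the pattern it discovered
```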

Figure: Illustration of the topology of a generic artificial neural network: three input nodes connect to four hidden nodes, which connect to two output nodes. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Neural Networks (NN): Neural networks, also called artificial neural networks (ANN), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in a human brain. In a neural network, data enter at the first layer, pass through a hidden layer of nodes, where calculations that adjust the strength of the connections are performed, and then go to an output layer.

Deep Learning: Deep learning models are a subset of neural networks. With multiple hidden layers, deep learning algorithms are potentially able to recognize more subtle and complex patterns. Like neural networks, deep learning algorithms involve interconnected nodes where weights are adjusted, but as mentioned earlier there are more layers and more calculations that can make adjustments to the output to determine each decision. The decisions by deep learning models are often very difficult to interpret as there are so many hidden layers doing different calculations that are not easily translatable into English rules (or another human-readable language).
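As a concrete companion to the figure above, here is a minimal Python sketch of one forward pass through a small network with three input nodes, four hidden nodes, and two output nodes. The weights are random placeholders; training would adjust them.

```python
# A sketch of one forward pass through a 3-4-2 network like the one pictured
# above, using NumPy. Weights are random placeholders here.
import numpy as np

rng = np.random.default_rng(1)
W_hidden = rng.normal(size=(3, 4))   # connections from 3 inputs to 4 hidden nodes
W_output = rng.normal(size=(4, 2))   # connections from 4 hidden nodes to 2 outputs

def forward(x):
    hidden = np.tanh(x @ W_hidden)   # hidden layer: weighted sums passed through a nonlinearity
    return hidden @ W_output         # output layer

print(forward(np.array([0.2, 0.5, 0.9])))  # two output values
```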

Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

NLP technologies help in many situations, including: scanning texts to turn them into editable text (optical character recognition), speech to text, voice-based computer help systems, grammatical correction (like auto-correct or Grammarly), summarizing texts, and others.
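To give a feel for one classic NLP task, here is a toy Python sketch of extractive summarization: it scores each sentence by how frequent its words are across the text and keeps the top-scoring sentence. Real NLP systems rely on much richer models of grammar and meaning; this is only an illustration.

```python
# A toy sketch of extractive summarization: keep the sentence whose words
# appear most frequently in the text overall.
from collections import Counter
import re

text = ("Natural language processing helps computers work with human language. "
        "It powers tools like speech to text and auto-correct. "
        "Language is ambiguous, so this is hard.")

sentences = re.split(r"(?<=[.!?])\s+", text)
word_counts = Counter(re.findall(r"[a-z']+", text.lower()))

def score(sentence):
    words = re.findall(r"[a-z']+", sentence.lower())
    return sum(word_counts[w] for w in words) / max(len(words), 1)

print(max(sentences, key=score))  # the highest-scoring sentence serves as the "summary"
```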

Robots: Robots are embodied mechanical machines that are capable of doing a physical task for humans. “Bots” are typically software agents that perform tasks in a software application (e.g., in an intelligent tutoring system they may offer help). Bots are sometimes called conversational agents. Both robots and bots can contain AI, including machine learning, but do not have to have it. AI can help robots and bots perform tasks in more adaptive and complex ways.

User Experience Design/User Interface Design (UX/UI): User-experience/user-interface design refers to the overall experience users have with a product. These approaches are not limited to AI work. Product designers implement UX/UI approaches to design and understand the experiences their users have with their technologies.

Explainable Machine Learning (XML) or Explainable AI (XAI): Researchers have developed a set of processes and methods that allow humans to better understand the results and outputs of machine learning algorithms. This helps developers of AI-mediated tools understand how the systems they design work and can help them ensure that they work correctly and are meeting requirements and regulatory standards.

It is important to note that the term “explainable,” in the context of explainable machine learning or explainable AI, refers to an after-the-fact account of what a model did, not a true understanding of how the model works. In theory, explainable ML/AI means that an ML/AI model will be “explained” after the algorithm makes its decision so that we can understand how it arrived at that decision. This often entails using another algorithm to help explain what is happening inside the “black box.” One issue with XML and XAI is that we cannot know for certain whether the explanation we are getting is correct, and therefore we cannot fully trust either the explanation or the original model. Instead, researchers recommend the use of interpretable models.
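As one small example of a post-hoc explanation method, the sketch below uses permutation importance from scikit-learn: it estimates how much each input feature matters by shuffling that feature and measuring how much the model’s performance drops. The data are made up, and like other XAI techniques this describes the model’s behavior after the fact rather than revealing its internal logic.

```python
# A minimal sketch of one post-hoc explanation technique: permutation importance.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Made-up data: [hours_studied, shoe_size] -> passed (1) or not (0)
X = [[1, 7], [2, 9], [8, 6], [7, 10], [3, 8], [9, 7], [2, 6], [8, 9]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # hours_studied should matter far more than shoe_size
```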

Thank you to Michael Chang, Ph.D., a CIRCLS postdoctoral scholar, for reviewing this post and to Eric Nentrup for support with specific definitions. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

________________________________________________
1 Fusco, J. (2020). Book Review: You Look Like a Thing and I Love You. CIRCLEducators Blog. Retrieved from https://circleducators.org/review-you-look-like-a-thing/

Artificial Intelligence and Education: What We’re Up To

Image: notebook, lightbulb, atom, and pencils inside a digital circuit board
by Pati Ruiz

I was recently asked for an overview of the AI and Education landscape and how we are participating in it. In addition to promoting equity and accountability in AI, here is a summary of our recent writing and research including key ideas from our work. We believe that AI systems should support and augment, but never replace, a human. To ensure this, emerging technology systems and tools should be developed with the input of educators, learners, and families. As always, please share your thoughts with us @EducatorCIRCLS.

Writing and Presentations

AI and the Future of Teaching and Learning | A blog series we partnered on with the U.S. Department of Education’s Office of Educational Technology

Key Ideas:

  • Educational technology is evolving to include artificial intelligence.
  • Artificial intelligence will bring “human-like” features and agency into future technologies.
  • Policy will have an important role in guiding the uses of artificial intelligence in education to realize benefits while limiting risks.
  • Artificial intelligence will enable students and teachers to interact with technology in human-like ways.
  • Individuals will find it difficult to make choices that balance benefits and risks.
  • Creating policies can strengthen how people make decisions about artificial intelligence in education.
  • Educational applications of many types will be artificial intelligence-enabled, including teaching and learning, guiding and advising, and administration and resource planning applications.
  • Use of artificial intelligence systems in school technology is presently light, allowing time for policy to have an impact on safety, equity, and effectiveness.
  • Policies should encourage teacher engagement, including the development of teachers’ trust, and their confidence to recommend not using untrustworthy artificial intelligence systems.
  • Policies should incorporate experiences for educators to shape and support their own professional learning about how to utilize artificial intelligence systems in teaching and learning.
  • Including and informing educators in design and development decisions will result in more useful and usable teacher supports.

AI or Intelligence Augmentation for Education? | Communications of the ACM 

Key Ideas:

  • We recommend a focus on intelligence augmentation (IA) in education that would put educators’ professional judgment and learners’ voice at the center of innovative designs and features.
  • An IA system might save an educator administrative time (for example, in grading papers) and support their attention to their students’ struggles and needs.
  • An IA system might help educators notice when a student is participating less and suggest strategies for engagement, perhaps even based on what worked to engage the student in a related classroom situation.
  • We hope that IA for education will focus attention on how human and computational intelligence could come together for the benefit of learners.

Artificial Intelligence 101: Covering the Basics for Educators | Digital Promise Blog

Key Ideas:

  • AI lets machines make decisions and predictions.
  • Teachers are essential to education, and AI should be used to better support them.
  • Technology often comes with ethical implications, and AI is no different; educators should ask questions and investigate AI tools and systems before adopting them in a classroom.

Teachers Partnering with Artificial Intelligence: Augmentation and Automation | Educator CIRCLS Blog 

Key Ideas:

  • Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue.
  • AI systems should support or augment, but never replace, a teacher.
  • These systems should be developed with the input of teachers, students, and families.

Artificial Intelligence and Adaptivity to Strengthen Equity in Student Learning | Getting Smart

Key Ideas:

  • Educators, researchers, and developers prioritize adaptivity when it comes to emerging learning technologies.
  • Incorporating AI tools requires specific and precise inputs to generate useful outputs.
  • When practitioners, learners, researchers, and developers work together with shared values, more equitable learning is possible.

Ethical AI | EngageAI Nexus Blog 

Key Ideas:

  • Ethical considerations should be front and center throughout the development of any new AI innovation, and ethics should be central to our definition of success for AI.
  • Policies and guidelines from the government, accreditation requirements in education, and standards of professional ethics are all needed to reinforce ethics in AI.
  • Public education is also important so that end-users can make informed decisions based on a full understanding of key issues such as transparency and privacy.

Definitions | Glossary of Artificial Intelligence Terms for Educators: A glossary written for educators to reference when learning about and using artificial intelligence (AI).

Presentation | Insights on Artificial Intelligence and the Future of Teaching and Learning at the 2023 Consortium for School Network (CoSN) Conference.

Listening Sessions | AI and the Future of Learning: Listening Sessions | We supported the U.S. Department of Education’s Office of Educational Technology listening sessions about Artificial Intelligence (AI). We connected with teachers, educational leaders, students, parents, technologists, researchers, and policymakers to gather input and ideas and to engage in conversations that will help the Department shape a vision for AI policy that is inclusive of emerging research and practices while also informed by the opportunities and risks.

Ongoing Research

Emerging technology adoption framework: For PK-12 education | Educator CIRCLS Emerging Technology Advisory Board

Key Ideas:

  • A framework we co-developed with education community members to help ensure that educational leaders, technology specialists, teachers, students, and families are all part of the evaluation and adoption process for placing emerging technologies (including artificial intelligence and machine learning) in PK-12 classrooms.
  • We are currently working with League member Willy Haug, Director of Technology and Innovation, to modify this framework for adoption at Menlo Park City School District.

Study | ChatGPT/GPT-4 for Developing Sample Computational Thinking Lesson Plans at North Salem School District

  • I am working with Dr. Julio Vazquez, Director of Instruction and Human Resources at North Salem School District, who is working with his team to develop sample computational thinking lessons across all subject areas K-12 using ChatGPT. These lessons are not meant to be implemented in the classroom “as is”; rather, they are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate on and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process.

Artificial Intelligence Practitioner Advisory Board | A group that will explore the use of emerging technologies in classrooms, and how we might leverage technologies to better support educators and their students. We hope to foster a sense of community within the group where researchers and developers can learn along with you as we all go through the process of reviewing technologies and making recommendations on their use. This Practitioner Advisory Board is supported by two NSF projects:

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Developing Sample Computational Thinking Lessons with ChatGPT

by Pati Ruiz, Merijke Coenraad, and Judi Fusco with contributions from Julio Vazquez

What is ChatGPT?

Let’s start with some definitions. ChatGPT is commonly classified as a natural language processing model, meaning it deals with language and human speech patterns, and as “generative artificial intelligence,” meaning that it is AI that creates new content — in this case, new text.

More specifically, ChatGPT is a chat-based generative pre-trained transformer, meaning that the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); and (3) can process sentences differently than other types of models (Transformer). Basically, it’s a chatbot that allows a user to ask a question in plain language and get a response similar to how a human would reply.
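For readers curious about what “asking a question and getting a response” looks like programmatically, here is a minimal sketch that assumes the OpenAI Python client (version 1 or later) and an API key set in the environment. The model name is a placeholder, since available models change over time; treat this as illustrative only.

```python
# A minimal sketch of asking a chat model a question programmatically,
# assuming the OpenAI Python client (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute whatever is current
    messages=[{"role": "user", "content": "Explain computational thinking to a 5th grader."}],
)
print(response.choices[0].message.content)
```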

What does this mean for education?

“ChatGPT is a good prompt for conversation.
I see this tool as a starting point for teachers and students.”
-Julio Vazquez, North Salem Central School District

Despite the precedent of banning access to ChatGPT set by New York City Public Schools in January 2023, not all school districts are following suit. Some educators believe that these AI systems and tools are out in the world, and the best thing educators can do is teach students to partner with AI tools so they can be better prepared for a technological world. For example, English teacher Cherie Shields was recently interviewed by The New York Times, where she shared that she assigned students in one of her classes to use ChatGPT to create outlines for a recent essay assignment. She shared that the process helped deepen students’ understanding of the stories while also teaching them to interact with an AI system by manipulating their inputs to get the responses they were looking for. In this case, ChatGPT became a tool that can support learning when we thoughtfully include it in our lessons and also guide students in using it well.

Dr. Julio Vazquez, Director of Instruction and Human Resources, and his team are encouraging experimentation and access to ChatGPT for all faculty and staff and are thinking about how to provide students with access in a manner that will not conflict with student privacy laws. Staff members are rolling their sleeves up and starting to explore and learn about how they can use it with their students productively. In fact, they are exploring the use of ChatGPT to develop sample Computational Thinking (CT) lesson plans that the team uses as a jumping off point in their CT Pathways development process.

ChatGPT for Developing Sample Computational Thinking Lesson Plans

compass pointing north
North Salem Central School District
In a recent conversation with Dr. Vazquez, we asked him more about how he and his teachers are incorporating ChatGPT in their computational thinking lesson planning process.

Dr. Vazquez and his colleague Cynthia Sandler, Library Media Specialist, started by entering prompts into ChatGPT and seeing what came out. The searches started with prompt terms that went something like “generate a 5th grade lesson for computational thinking focusing on science.”

As the team began to analyze the lesson plans that came out, they realized they needed to make adjustments. Julio shared that he and his team have become better at giving ChatGPT enough context so that the generated lessons are closer to what the team expects of a lesson plan and the content better aligns with both CT and content area standards. For example, the prompt terms for a more recent lesson included:

“write a science lesson that integrates 9-12.CT.1 (Create a simple digital model that makes predictions of outcomes) and HS-PS1-5 (Apply scientific principles and evidence to explain how the rate of a physical or chemical change is affected when conditions are varied).”

The prompt terms and outputs were documented and provided a good starting point for sparking conversation. On first pass, the team collectively agreed that they liked the structure of the generated lesson plans. Beyond format, and in addition to computational thinking and subject area standards, the prompt terms entered into ChatGPT also included Habits of Mind, the thinking dispositions implemented in North Salem, as well as Visible Thinking Routines.
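As a purely hypothetical illustration of how prompt terms like these might be assembled before being pasted into a chatbot, the short sketch below builds a prompt from a computational thinking standard, a science standard, and a couple of Habits of Mind. The specific habits chosen and the overall structure are our own invention for illustration, not North Salem’s actual workflow.

```python
# A hypothetical sketch of assembling lesson-plan prompt terms from standards
# and local thinking dispositions. Names and structure are invented for illustration.
ct_standard = "9-12.CT.1 Create a simple digital model that makes predictions of outcomes."
science_standard = ("HS-PS1-5 Apply scientific principles and evidence to explain how the rate "
                    "of a physical or chemical change is affected when conditions are varied.")
habits_of_mind = ["Thinking flexibly", "Striving for accuracy"]  # illustrative picks

prompt = (
    "Write a high school science lesson that integrates the following:\n"
    f"- Computational thinking standard: {ct_standard}\n"
    f"- Science standard: {science_standard}\n"
    f"- Habits of Mind: {', '.join(habits_of_mind)}\n"
    "Include a Visible Thinking Routine and a formative assessment."
)
print(prompt)
```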

Dr. Vazquez and his team have worked with ChatGPT to develop sample computational thinking lessons across all subject areas K-12. These lessons are not meant to be implemented in the classroom “as is”; rather, they are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate on and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process. We look forward to sharing more! Until then, do you have questions for us? Are you integrating ChatGPT in your classroom, school, or district? Let us know @EducatorCIRCLS.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Teachers Partnering with Artificial Intelligence: Augmentation and Automation

3x2 grid of colorful AI and learning related icons, including students speaking in different languages and an iPad with an award stamp

By Pati Ruiz and Judi Fusco

Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue. Our starting point is that AI systems should support or augment, but never replace, a teacher. In order to ensure this, these systems should be developed with the input of teachers, students, and families.

So, what types of AI systems do teachers want to see developed? A group of teachers from the Merlyn Mind Practitioner Advisory Board shared ideas for how AI might help teachers better support their students. One scenario emerged around students who have Individualized Education Programs or Plans (IEPs)1. In this post we will describe how an AI system might support teachers and students by automating:

  1. Planning and Supporting Preferences
  2. Monitoring
  3. Documentation

Planning and Supporting Preferences

First, a teacher could input student instructional plans into the system. Then, the system can review the plans, make recommendations, and send alerts to the teacher when something may not work for a student. In the alert, the system could provide suggestions of adaptations on lessons or assignments based on the needs of each student. For example, an AI system can scan what’s coming up in an instructional unit and alert the teacher that the website they selected does not meet the accessibility standards required by the students in the classroom. A more advanced system could also suggest an alternative option, or even better, search for multiple resources that are appropriate and let the teacher decide what resources are best suited for their students’ instructional needs. In all cases, the AI system is only helping and making suggestions that the teacher may act on.

An AI system can also allow for a broader range of inputs from students for assignments based on their needs. For example, if a student accommodation includes submitting assignments as recorded audio, but the teacher prefers written assignments, an AI system can convert the student’s audio to text so the teacher can review or grade the text. The speech-to-text tool should also allow the teacher to hear the student’s original audio for a particular sentence or phrase, for example, if the transcription was not successful. Alternatively, if a student needs to hear the teacher’s comments on their assignments instead of reading them, the AI system can convert the written comments into speech for the student to hear. To further help the teacher, the system might suggest comments they had written for another student so the teacher can reuse or repurpose them. The system might also remind the teacher of a student’s preference for feedback; if the student prefers verbal feedback, the teacher could read and record the comments for that more personal touch.
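As a hedged sketch of just the speech-to-text piece of this scenario, the example below assumes the open-source openai-whisper package is installed; the file name is a placeholder. A real classroom tool would also need strong privacy safeguards and teacher review before any grade is assigned.

```python
# A hedged sketch of speech-to-text using the open-source openai-whisper package.
# The audio file name is a placeholder; a real system would add privacy safeguards.
import whisper

model = whisper.load_model("base")                       # small general-purpose speech model
result = model.transcribe("student_assignment_audio.m4a")
print(result["text"])                                     # transcript the teacher can review or grade
```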

Monitoring

To support teachers in providing adequate accommodations for their students, an AI system can monitor student IEP information and make automated recommendations for needed support. For example, the system could identify students who require extended time and either share a list with the teacher or make appropriate adjustments to due dates for individual students in a learning management system. Here, we point out the need for AI systems to be able to interact with other systems or be embedded within them. Additionally, the system must do this in a way that does not expose sensitive information about students to the whole class.
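The sketch below is a purely hypothetical illustration of the extended-time idea: it flags students whose (invented) accommodation records include an extended-time multiplier and computes adjusted due dates that a teacher could review before applying them in a learning management system. The data structures are made up for illustration; a real system would work with actual IEP and LMS records.

```python
# A hypothetical sketch of the "extended time" accommodation: compute adjusted
# due dates for flagged students, for a teacher to review and approve.
from datetime import datetime, timedelta

students = [
    {"name": "A", "accommodations": {"extended_time_multiplier": 1.5}},
    {"name": "B", "accommodations": {}},
]

def adjusted_due_date(assigned, base_days, multiplier):
    """Stretch the allotted working days by the student's multiplier."""
    return assigned + timedelta(days=base_days * multiplier)

assigned = datetime(2024, 3, 1)
for s in students:
    multiplier = s["accommodations"].get("extended_time_multiplier", 1.0)
    if multiplier > 1.0:
        new_date = adjusted_due_date(assigned, 7, multiplier).date()
        print(s["name"], "->", new_date)  # teacher reviews before the LMS is updated
```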

Related to the text-to-speech and speech-to-text ideas discussed above, an AI system can also provide individualized read-aloud capabilities for students who need that support. The system could also remind the teacher to provide tools, like headphones or closed captioning for students who need to listen to content. We firmly believe that AI systems can help by doing things that machines are good at, while continuing to enable teachers to focus on what humans do best—like developing interpersonal relationships and identifying nuanced needs. With these types of automated supports, it is important to ensure that teachers have the ability to make the final decisions about students’ needs and that students have the agency to accept and decline supports as they go.

Documentation

Supporting a classroom with students who have varying needs—whether they are documented in an IEP or not—requires a significant amount of monitoring and reporting on the part of educators. An AI system could support teachers by not only monitoring the individual requirements of students, but also documenting the adjustments and accommodations that were made for each student. This documentation could then be shared with the students’ families to provide a summary of the work that students have accomplished and how they have been supported in completing that work. Of course, a teacher would review and verify that the summary produced by the AI system is accurate and flag any issues with the write-ups that would need to be addressed by the AI design team.

By the end of the instructional unit, teachers would be able to review reports of student progress, identify what worked and what didn’t, and ensure that all students are making meaningful progress. Automating planning, tracking, and documentation can give a teacher more time to care for students; however, given the various risks AI systems bring, it is crucial that teachers also have the capability to override an AI system when needed.

Risks

The imagined AI system described helps teachers do what they do best by supporting them to ensure their students receive the accommodations they require and then documents those accommodations. Using such systems will come with risks, and AI systems that engage with student IEP data need to have the highest level of data privacy and oversight. As we discussed earlier, educators must be involved—for example, the teacher is in charge of giving feedback, but the system may make suggestions that help the teacher give better feedback. If educator experts are not in the loop, there could be harmful consequences for students. Educators must be diligent and not assume that every accommodation determined by an AI system is correct or the best decision. AI systems lack full context and the ability to make human decisions. Educators must have oversight and be able to verify and approve every decision made by the system.

Educator Voices

This blog post presents an imagined AI system based on conversations with a group of practitioners from the Merlyn Mind Practitioner Advisory Board. We need more teachers and educators involved in these conversations, so please consider this blog post as an invitation to you to connect with us and join the conversation on the future of AI in Education. In addition to Merlyn Mind, if you are interested in getting involved, please visit the links below.

1 An IEP is a legal document in the United States that is developed for all public school children who need special education. It is created by district personnel with input from the child’s guardians and is reviewed every year. For more information see https://www2.ed.gov/about/offices/list/ocr/docs/edlite-FAPE504.html

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]