Category Archives: Teachers & Researchers

Engaging Educators in Emerging Technology Research


Image by Tung Lam from Pixabay

by Cassandra Kelley, Sarina Saran, Deniz Sonmez Unal, and Erin Walker

This blog post discusses the outcomes of an Educator CIRCLS workshop that disseminated computer science education research findings to practitioners while prompting broader discussions of AI in classrooms.

This past summer and fall of 2024, Educator CIRCLS hosted a series of webinars, workshops, and convenings between researchers and practitioners focused on artificial intelligence (AI) literacy. Specifically, they were designed to engage participants in reflective conversations about ethics, equity, and other problems or possibilities of practice concerning the integration of AI (especially genAI) in PreK-12 education.

As part of this series, our team from the University of Pittsburgh piloted a novel strategy for research dissemination, in which we developed supplemental curricular resources or guided activities and shared them with educators in a workshop format. The goals behind these activities were twofold:

  • To facilitate discussion among educators about current research on the integration of emerging technologies that incorporate AI (e.g., robots and intelligent tutoring systems) and how they might impact the future of learning in education settings, and
  • To provide a mechanism for educators to think critically about ways to introduce elements of AI literacy to students via real-world exercises that can simulate the work that researchers are doing (see Translating Research on Emerging Technologies for Educators for further background about the design of this workshop).

During the planning stage of the workshop, we felt it was pertinent to get a better understanding of PreK-12 teachers’ experiences with professional learning for computer science (CS) education. We wanted to speak directly with them about the impact of these experiences on their practice and seek their recommendations for how these professional development programs are designed.

We interviewed 20 educators from 16 states who taught across different grade levels and/or content areas. Most interviewees felt a disconnect with research dissemination as a form of professional learning and expressed a desire to better understand how emerging technologies connect with research-based practices and learning theories. They discussed how previous workshops they had attended focused either directly on technology tools or on a mandated “turnkey curriculum” based on rote memorization and knowledge transfer (e.g., Advanced Placement CS course materials). Teachers appreciated receiving curricular resources because such resources help them stay current in this ever-evolving field. They would like to see fewer “direct instruction” lessons and more real-world approaches with project-based or problem-based learning (PBL) that promote inquiry, similar to what is expected in industry. They also emphasized the need for further collaborative opportunities to ideate on promoting digital/AI literacy through their instruction.

Following our conversations with teachers, we intentionally designed a workshop with guided activities, based on research projects on emerging technologies, that could expose practitioners to existing literature and findings while potentially seeding new ideas for curricula. Our workshop design incorporated the following structure: (1) outline the theoretical framework and CS concepts, (2) have participants experience different roles (e.g., student, educator, and researcher) within inquiry-based activities, (3) share project research findings, (4) discuss implications for practice and ways to address AI literacy, and (5) reflect on the overall format of the workshop and consider how to improve the design of future workshops.

We featured two research projects:

Our first session on teachable robots presented a research project that examined middle school students’ interactions with Nao robots in mathematics instruction. Participants were asked to think about the design and implementation challenges in building a robotic dialogue system for learning from the perspectives of a student, an educator, and a researcher. They explored CS concepts related to Natural Language Processing (NLP) by: (1) determining keywords used in solving a math problem, (2) reviewing sample dialogue scripts and the Artificial Intelligence Markup Language (AIML) that researchers used to program the Nao robot, and (3) interacting with prototype simulations created in Pandorabots that represented social and nonsocial versions of a chatbot. We also shared further extensions that could potentially be remixed or adapted for use with students, such as revising the dialogue by adding more social elements, writing a new script for solving a different math problem in AIML, developing a chatbot to test the code, or experimenting with a program such as Scratch to create a dialogue between two sprites.
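For readers who want to tinker with the kind of keyword matching that AIML scripts perform, here is a minimal Python sketch of a rule-based tutor reply with a social and a nonsocial variant. Every keyword, reply, and opener below is invented for illustration; none of it comes from the project’s actual dialogue scripts.

```python
# A toy keyword-matching dialogue loop in the spirit of AIML patterns.
# All keywords, replies, and openers are illustrative inventions; they
# are not the dialogue scripts from the actual Nao robot study.

RULES = [
    # (keywords that must all appear in the student's message, tutor reply)
    ({"area", "rectangle"}, "To find the area of a rectangle, multiply length by width."),
    ({"stuck"}, "Which part of the problem feels tricky?"),
    ({"answer"}, "Walk me through how you got your answer."),
]

def reply(message: str, social: bool = True) -> str:
    """Return the reply for the first rule whose keywords all appear."""
    words = set(message.lower().split())
    for keywords, response in RULES:
        if keywords <= words:
            # The "social" variant adds a rapport-building opener.
            return ("Great thinking! " if social else "") + response
    return "Can you tell me more about what you are trying to do?"

print(reply("how do I find the area of a rectangle"))         # social version
print(reply("how do I find the area of a rectangle", False))  # nonsocial version
```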

Our second session on neuroimaging and educational data mining presented a research project that examined how students process information while interacting with intelligent tutoring systems. A major component of this study focused on the analysis of data collected by these systems to uncover patterns or trends that can inform and potentially improve teaching and learning practices. Additionally, neuroimaging brain data was collected as a proof of concept to explore how it might be analyzed to better understand how cognition, attention, and emotion affect learning (for further background on how this equipment works, see Neuroscience in Education). Similar to the first workshop, we presented guided activities to help participants think about the design of intelligent tutoring systems and the types of data collected; participants created their own data visualizations from sample datasets for analysis using the free educational software Common Online Data Analysis Platform (CODAP), and categorized example brain activation images by their corresponding levels of task difficulty. Further extension activities were shared, such as outlining specific actions that an intelligent tutoring system might take to provide feedback (e.g., hints, prompting questions, or praise) in response to student behavior and debunking “neuro-myths” in education.
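For a rough sense of the data-mining activity, here is a small Python sketch in the same spirit as the CODAP exercise: it summarizes fabricated tutor-log data and plots one pattern. The column names and values are invented for illustration and do not come from the project’s datasets.

```python
# Illustrative analysis of (fabricated) intelligent-tutoring-system logs,
# mirroring the kind of visualization participants built in CODAP.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical log: one row per problem attempt.
log = pd.DataFrame({
    "student": ["A", "A", "B", "B", "C", "C"],
    "hints_used": [0, 2, 1, 3, 0, 1],
    "seconds_on_task": [45, 120, 80, 200, 30, 60],
    "correct": [True, False, True, False, True, True],
})

# Do attempts that use more hints tend to take longer?
summary = log.groupby("hints_used")["seconds_on_task"].mean()
print(summary)

summary.plot(kind="bar", xlabel="Hints used", ylabel="Mean seconds on task",
             title="Time on task by hint usage (illustrative data)")
plt.tight_layout()
plt.show()
```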

At the conclusion of each workshop, we asked educators for their thoughts about the potential benefits and challenges of integrating these emerging technologies in PreK-12 classrooms and what they would like future research to explore. Our goal was to hear practitioner voices and gather input for researchers and developers to consider. This led to a focused discussion on the need to promote AI literacy in education, especially to address ethics and transparency.

Key takeaways from the experience are:

  • Teachers appreciate the opportunity to learn more about innovative research projects, but they especially like the idea of being in dialogue with researchers and potentially playing a role in the work that’s being done. Many volunteered to pilot future projects exploring the implementation of curricula and/or emerging technologies with their students if invited.
  • Teachers expressed that the content in our guided activities, while rigorous, enabled them to be more reflective. They were engaged with the hands-on simulations of the research and discussed how “active learning helped to promote deeper thinking.” As one participant mentioned, the activities allowed her to “think outside of the normal pedagogy box.”
  • Teachers had mixed feelings about the relevance of the workshop content and how to bring it into their schools or classrooms. Some thought it would be challenging to implement the activities with students due to external factors and other curricular mandates. As one participant stated, “one tension with cutting-edge research is that it’s difficult to be practical in the moment. I think you’re on the right track with scaling down the technology or bringing the insights to the classroom level…this [workshop] is way more effective than most formats, but I think you would have a difficult time getting educators to opt in.” Meanwhile, another participant said, “in both workshops, the concepts and practice of the teachable bot and neuroimaging was beyond the ‘here and now’ of teaching and learning, but the examination of how our current concepts of pedagogy may change as we catch up to the technology.” Additionally, several teachers discussed how the workshop offered new ways for them to think about bringing in real-world data and student-led projects to promote further inquiry and AI literacy.
  • Teachers valued the opportunity to collaborate with other educators and researchers. They liked exploring different lenses (e.g., student, teacher, and researcher) while engaging in reflective discussions about the impact of research on their practice. One teacher highlighted how it felt like a “safe space to troubleshoot uses of AI and educational data mining” and another expressed appreciation for “garnering others’ experiences to get further ideas for their own classroom.”

Based on overall positive feedback from our teacher participants, we believe this research dissemination workshop model is worth exploring with other projects, especially since educators felt they were able to take something meaningful away from the experience. As one participant stated, “I feel very fortunate to be involved in this work. I’m very happy that your team is working to push the boundaries of how we learn and teach.” This gives us hope that researchers will consider the importance of collaborating and co-designing with educators. Additionally, this work validates the need for further mediation between research and practice, which potentially can include creating new roles for “knowledge brokers” (Levin, 2013) to promote further dialogue across these boundaries in order to truly make a broader impact.

Thank you to Sarah Hampton and Dr. Judi Fusco for their thinking and feedback on this post.

References:

Levin, B. (2013). To know is not enough: Research knowledge and its use. Review of Education, 1(1), 2–31. https://doi.org/10.1002/rev3.3001


About the Authors

Cassandra Kelley, Ed.D. has over fifteen years of experience in K-12 and teacher education. She earned her doctorate in Learning Technologies from Pepperdine University and is passionate about exploring new tools that can improve teaching and learning. She currently serves as a Broader Impacts Project Coordinator at the University of Pittsburgh and supports CIRCLS with Expertise Exchanges in the AI CIRCLS and Educator CIRCLS subcommunities. Cassandra also teaches graduate courses for National University in the Master of Science in Designing Instructional and Educational Technology (MSDIET) Program.

Sarina Saran is a third-year undergraduate student at the University of Pittsburgh pursuing a B.S. in Computer Science and a B.A. in Media and Professional Communications. She is curious about the intersection of technology and communication, and she has been able to develop a greater understanding of the challenges in this area as an Undergraduate Research Assistant in the Office of Broader Impacts.

Deniz Sonmez Unal is a Ph.D. candidate in Intelligent Systems at the University of Pittsburgh. Her research focuses on modeling student cognitive states using multimodal data, including interaction logs, verbal protocol data, and neural signals to enhance the diagnostic capabilities of intelligent tutoring systems.

Erin Walker, Ph.D. is a co-PI of CIRCLS and a tenured Associate Professor at the University of Pittsburgh, with joint appointments in Computer Science and the Learning Research and Development Center. She uses interdisciplinary methods to improve the design and implementation of educational technology, and then to understand when and why it is effective. Her current focus is examining how artificial intelligence techniques can be applied to support social human-human and human-agent learning interactions.

Translating Research on Emerging Technologies for Educators

Image by mcmurryjulie from Pixabay
by Cassandra Kelley

This blog post discusses the development of an Educator CIRCLS workshop designed to “translate” or disseminate computer science education research findings to practitioners while promoting AI literacy.

Have you ever played the telephone game, where a sentence is whispered into someone’s ear and passed from person to person until the final person reveals the message aloud to see how closely it aligns with what was originally said? I am frequently reminded of this childhood game in my role as the Broader Impacts Project Coordinator at the University of Pittsburgh and CIRCLS, where I think about how we can “translate” research into practice for practitioners. The game has become much more challenging, however, due to the technical terminology, academic jargon (e.g., research methodologies), and other contextual phrases that are often included within the message being communicated. Moreover, all of the players have individual “language barriers” (e.g., prior knowledge, experience, and expertise) that add another layer of difficulty to ensuring the mediated message is comprehensible to all.

My broader impacts position, inspired by the National Science Foundation (NSF) merit review criteria, was created as an avenue for broadly disseminating research on emerging technologies for teaching and learning—similar to programs such as Research Practice Partnerships (RPP) or Research Experiences for Teachers (RET). I was drawn to this opportunity because I feel it is critical not only for educators to learn about and understand education research, but also for researchers to consider the direct impact of their work on practice. I firmly believe in the importance of bridging the gap that currently exists between research and practice by promoting partnerships among all stakeholders, which can include further engagement in participatory research and involvement in co-design models.

As a former PreK-12 educator and higher education faculty member supporting pre-service teachers, my initial concern about this translation process was how students and teachers would benefit. From my own experience working in school systems, I have observed an institutional culture where research and policy are “thrown” at teachers through mandates and other recommendations. Educators’ voices are often missing from the conversation, and there is no immediate focus on how to best support their practices, which should truly center on the needs of students. I have also witnessed researchers temporarily engaging with educators for the purpose of conducting a study and then disappearing, which I personally know can feel like a one-sided transaction.

These factors led me to consider novel strategies for research dissemination that could potentially build stronger connections between researchers and practitioners. Specifically, I wanted to explore the development of supplemental curricular resources to be shared with teachers during a workshop so they could have opportunities to: (1) interact with computer science (CS) education concepts and understand their relationship within research findings, (2) experience the role of a learner and researcher, (3) engage in discussion with other educators and researchers about the impact of specific research projects on practice—especially with regard to the integration of emerging technologies, and ultimately to (4) bring elements of their professional learning back into the classroom via guided activities that could be adapted for implementation with students.

It is important to note that these goals were shaped through many discussions with practitioners, especially after having the opportunity to speak directly with 20 educators about their experiences as participants in professional development programs for CS education. I sought their recommendations for how we might design and structure a workshop to disseminate research findings via our supplemental curricular resources (see Engaging Educators in Emerging Technology Research for further details about the facilitation of this workshop). Throughout these reflective conversations, participants frequently mentioned that most programs tend to be “technocentric” and focus more on “new shiny technology tools” than on pedagogy for classroom integration or research-based practices and learning theories. Educators advocated for more rigor and inquiry-based activities that immerse them in the research literature, paired with opportunities for collaboration and the exchange of ideas or curricular resources; each of these elements was intentionally incorporated into our workshop design.

Additionally, I connected with members of interdisciplinary research teams to better understand their different areas of expertise and the methodologies used across projects. I had to consider the application of CS terminology and concepts within each project (many of which were new to me) and pinpoint the key areas to focus on in the translation. Fortunately, I was introduced to a new undergraduate student in the lab who was double majoring in CS and communication. She was eager to help and became a translator for me when I wore my “learner hat,” similar to how a teacher’s assistant or tutor might provide direct instruction to dive deeper into the content. Likewise, I would then put on my “teacher hat” and explain pedagogical concepts (e.g., scaffolding, asking different levels of questions, and Universal Design for Learning (UDL) strategies) or learning theories (e.g., constructivism, sociocultural learning, and project-based learning) while we discussed how we could take research findings and use them to develop supplemental curricula or guided activities for dissemination to educators.

A final consideration in the development of these guided activities was how to simulate the research procedures in an immersive way without the technology equipment. This was necessary because we wanted to acknowledge potential constraints on implementation in schools, such as access issues and the need for further technical support or training, not to mention how expensive these emerging technologies are. Therefore, we engaged in further conversations with the research teams about how we might develop user-friendly prototypes of simulations that educators could interact with on their own devices. Our discussions reminded us that there may be further barriers to research dissemination in the traditional schooling environment, including challenges with existing curriculum requirements and/or scheduling constraints. For this reason, we decided it might make more sense to frame our activities as supplemental or enrichment materials that can be adapted or remixed across a variety of settings (e.g., after-school programs or summer camps).

In summary, these convenings with researchers and practitioners across what Wenger-Trayner and colleagues (2014) refer to as the “boundaries in landscapes of practice” helped us consider the institutional culture bounding each landscape. We found it extremely valuable learning from multiple perspectives and using these insights to help us identify existing boundaries and ways to collectively navigate them.

Key takeaways from the experience are:

  • Acknowledge the systemic barriers with regard to education policy and practices in different community settings.
  • Engage in learning partnerships by collaboratively negotiating and exploring the existing boundaries. This includes actively listening to all voices (e.g., researchers and practitioners) from different landscapes (e.g., PreK-12 education, higher education, and industry) to create a two-way dialogue of mutual reflection.
  • Focus less on the technology and more on the diffusion of innovative ideas as well as the AI literacy needed by ALL stakeholders for advancement of these ideas.
  • Develop immersive guided activities that promote further conversations about AI literacy while being grounded in research and learning theories. Be sure to clearly communicate these connections when translating back-and-forth and offer opportunities for reflective discussion.
  • Seek feedback at every stage of the iterative process and prioritize the community partnerships across the landscapes of practice above all.
  • Remember that the ultimate shared goal or vision is to positively impact the future of learning for students.

Since I personally identify as both a researcher and practitioner, I have learned firsthand the importance of negotiating my own experiences to build a bridge between my understanding of the teaching practice and students’ needs, while also thinking critically about advancing the field of education research. In order to bring these landscapes together, researchers must consider ways to make their work more accessible so they can get the necessary buy-in from teachers that will propel institutional change and innovation in the future of schooling and education. Likewise, educators need to keep seeking opportunities to stay abreast of current research findings, especially to help lead this exploration of new pedagogical practices or emerging technologies that can support teaching and learning. One avenue to achieve this is establishing sustained partnerships between researchers and practitioners through co-design or participatory research. Moreover, the incorporation of “intermediaries” or “knowledge brokers,” which Levin (2013) defines as “people or organizations that translate or transmit research,” similar to my position as a Broader Impacts Project Coordinator, can “play a critical role in the process of diffusing ideas and practices in education” (p. 21). It is my hope that researchers will take into account how emerging scholars, such as school administrators, academic coaches, a subgroup of PreK-12 teachers, post-doctoral students, or graduate students, might be leveraged to help spearhead this essential translation of research into practice.

Thank you to Sarina Saran, Deniz Sonmez Unal, Sarah Hampton, Dr. Erin Walker, and Dr. Judi Fusco for their thinking and feedback on this post.

References:

Levin, B. (2013). To know is not enough: Research knowledge and its use. Review of Education, 1(1), 2–31. https://doi.org/10.1002/rev3.3001

Wenger-Trayner, E., Fenton-O’Creevy, M., Hutchinson, S., Kubiak, C., & Wenger-Trayner, B. (Eds.). (2014). Learning in landscapes of practice: Boundaries, identity, and knowledgeability in practice-based learning. Routledge.


About the Author

Cassandra Kelley, Ed.D. has over fifteen years of experience in PreK-12 and teacher education. She earned her doctorate in Learning Technologies from Pepperdine University and is passionate about exploring new tools that can improve teaching and learning. She currently serves as a Broader Impacts Project Coordinator at the University of Pittsburgh and supports CIRCLS with Expertise Exchanges in the AI CIRCLS and Educator CIRCLS subcommunities. Cassandra also teaches graduate courses for National University in the Master of Science in Designing Instructional and Educational Technology (MSDIET) Program.

The Convenience of Inconvenience: Exploring Connections with Researchers and Practitioners

Image by Steve Johnson on Unsplash
by Nneka McGee and Krystal Chatman

The goals of change are aspirational. In the artificial intelligence (AI) era, we see change as a catalyst that will bring about the true transformation in education many of us have sought for decades. We all agree that change is necessary, right? Therefore, it’s only a matter of time before aspirations become concrete. Though we should continue to strive for the best ideas, there is one reality we must face: The difficulty of change is an inconvenient truth.

While change can be a challenge, particularly in education, hope is still a constant. Artificial intelligence and other emerging technologies have potential, but what are optimal ways to incorporate them into learning environments? Researchers have long explored the questions and possible answers of improving educational outcomes. Practitioners apply real-world experiences that significantly impact learners. The premise of bringing these disparate groups together offers a promising solution to understanding change.

In summer and fall of 2024, the Center for Integrative Research in Computing and Learning Sciences (CIRCLS) facilitated a convening of researchers and practitioners to explore AI problems of practice in PreK-12 education (for further details, see Educator CIRCLS ’24 Summer Series). This was an innovative act in itself, as the explorers of educational theory (researchers) are not often provided with the opportunity to collaborate with the facilitators (teachers) of the suggested strategies. This convening offered an opportunity for both sides to engage at a deeper level, gaining insight into how the “other side” operates to improve their practice. Through discussions, researchers had the opportunity to convey how valuable the experiences of practitioners are to the work of advancing innovation in education. Practitioners shared their interest in research areas such as generative AI and expressed their concerns about helping students navigate the future of learning.

Problems of Practice

Dialogue was a primary focus of the convening. To bridge the distance between practice and research, CIRCLS facilitators divided attendees into groups with the goal of centering conversations on what practitioners experience within their educational ecosystems. Through in-depth discussions, researchers and practitioners identified areas of focus by posing questions such as:

  • Do we need multiple literacies?
  • How do we address accessibility issues?
  • How do we use AI tools and what are the benefits?
  • How do we reach others across the spectrum to include more voices in developing AI tools?
  • How can we use AI to support educator coaching?

AI literacy was a common thread among the groups. There was a collective acknowledgement of the limitations surrounding this emerging technology given the current state of training at educational institutions. Without the requisite foundations in how to use AI effectively, its potential to transform education remains unclear. Additional conversations explored the limitations practitioners face in utilizing or implementing AI tools due to policies, and how the use of AI in learning spaces cannot swiftly progress until practitioner decision-makers gain more knowledge and confidence in AI use. Although researchers may be working in other areas, the conversations provided essential insights into the needs and wonderings of practitioners.

Research to Action

During a second round of grouping, researchers also sought input on their research topics and potential impact in today’s classrooms. Recruiting research subjects has always been a challenge; therefore, informing practitioners of research topics opened avenues for feedback and questions related to applicability and feasibility. Several topics emerged from the group, including:

  • Collaborative learning
  • Student voice
  • Systemic challenges
  • AI scoring
  • Policy development
  • Navigating tensions
  • Centering pedagogy

Just as in the prior group discussions, AI literacy was a popular talking point, often interwoven within the topics listed above.

When groups reconvened, all agreed that more conversation was necessary. More action was also on the minds of researchers and practitioners, but the realities of change were ever present. Several participants expressed concern about the trajectory of AI in education, particularly when considering access, bias, safety, security, and environmental impact. Despite misgivings, there was an overall sense of optimism about the future of research and the contributions of practitioners in propelling use of emerging technologies forward.


About the Authors

Nneka J. McGee, J.D., Ed.D. is an educational consultant and former Chief Academic Officer dedicated to ensuring students have access and opportunities to explore AI and emerging technologies. She completed her doctoral studies on AI and its implementation in K-12 classrooms. She is a frequent speaker on AI topics and serves as a member of several AI-focused advisory boards and committees.

Krystal Chatman is a seasoned educator and advocate with over 15 years of experience in integrating technology into teaching and promoting equitable computer science education. Currently serving as the Professional Learning & Community Manager for the Computer Science Teachers Association, she is also a co-founder of the Mississippi AI Collaborative, focused on AI upskilling across the Southern U.S. A dedicated leader in broadening participation in computer science and STEM, Krystal has facilitated panels on AI at the White House, spoken at the Brookings Institution, and developed Mississippi’s first AI conference.

Ethical Use of AI – Privileging measured and deliberate thinking

Photo by Ahmed Almakhzanji on Unsplash
by Kip Glazer

As a school leader and educator, I am deeply committed to promoting the ethical use of artificial intelligence (AI), a responsibility that we all share as AI-embedded tools become increasingly prevalent in our school systems. I strongly advocate using interactive methods, such as leveraging games to engage learners and educators in these crucial discussions (featured in this blog post by Marlon Matilla). I believe that the active participation of both researchers and practitioners is essential, and I am proud to have contributed to this vital discussion by being a part of the Educator CIRCLS community over the years.

As I think about the ethical use of AI, I can’t help but think about the design of AI systems and how they nudge users to behave. In his book Thinking, Fast and Slow, Daniel Kahneman argues that humans have System 1 thinking that reacts quickly and instinctively, while System 2 thinking reacts more slowly and deliberately. Unfortunately, our System 1 thinking (aka impulses) often overrides our System 2 thinking when making decisions, and many companies have exploited this to maximize profit at the expense of consumers. As much as technology companies tout the usefulness of AI systems, I am concerned that the rapid spread of AI is amplifying the functions of System 1 thinking at the expense of System 2 thinking. Because AI prioritizes speed and volume (similar to System 1 thinking) over quality based on careful deliberation (the hallmark of System 2 thinking), I am concerned that we humans will not be able to avoid the temptation of choosing the quickest answers. If you believe that is not likely, I would encourage you to read Nudge: The Final Edition by Thaler and Sunstein, who argue that choice architecture influences human behaviors. They cite Kahneman’s work extensively to show that even the slightest nudge can influence any of us and have a significant impact.

Undoubtedly, we have made significant strides in acknowledging and discussing the potential harm of AI. Our Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar is a testament to how the learning sciences field is actively responding to the growing concerns about the ethical use of AI. It’s important to note that the dangers and harm of AI often do not stem from any malicious intent by its creators. Instead, they result from our automatic thinking, reinforced by Generative AI’s (GenAI) speed. However, our increased understanding and awareness can help us navigate these challenges and somewhat mitigate potential harm.

Still, I can’t help but be concerned about the proliferation of GenAI, as it seems to automate and propagate products that replicate the basest human instincts. Think about deepfakes and their ability to go viral! Think about students attempting to shortcut their learning. Think about educators using GenAI tools to grade all their student work to save time. Such routine behaviors are not what we typically consider unethical AI use; rather, it is our automatic thinking accepting the results generated by a bias-ridden system. Even if we are aware of the potential harm of blindly trusting GenAI’s output, it will be difficult to eschew such influences, given the simple nudges embedded in our general environment. Most importantly, I am concerned that classroom teachers without basic AI literacy won’t be able to guide their students to recognize the potential harm done by GenAI in this new environment, because they can’t compete against System 1 thinking.

To activate System 2 thinking safely and productively, teachers need support from knowledgeable leaders who can guide them. Because we recognized the importance of school leaders in implementing System 2 thinking, Sofía De Jesús and I wrote Framework for AI Implementation for Administrators and Leaders in K-12 Schools when we participated in the Computer Science Teachers Association’s Equity Fellowship Program. We wanted to make things easier and more manageable for the many school leaders who feel overwhelmed by the number of currently available AI-enabled tools and the feverish sales pitches encouraging them not to leave their teachers and students behind! With the framework, we hope that they can access their System 2 thinking as they consider purchasing and implementing AI-enabled tools.

Educators need to stay vigilant against the urge to choose automation and efficiency over ethical and deliberate practice as more AI-embedded tools are introduced into our school ecosystems. Now more than ever, we must activate our System 2 thinking to ensure we are modeling and instilling equitable values with AI in education.

References:

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Thaler, R. H., & Sunstein, C. R. (2021). Nudge: The final edition. Yale University Press.


About the Author

Kip Glazer, Ed.D. is the proud principal of Mountain View High School. A long-time practitioner participant for CIRCLS and Educator CIRCLS, she has been sharing her school leader’s perspective on various learning technologies topics, including Artificial Intelligence implementation in schools. She has been an Equity Fellow for the Computer Science Teachers Association and a member of the EngageAI Practitioner Board.

Navigating Ethical AI: Empowering Educators with Tools, Frameworks, and Critical Perspectives

Photo of CRAFT Ethical Engine card game designed by Marlon Matilla
by Marlon Matilla

The Navigating Ethical AI: Interactive Lessons and Equitable Practices for Educators webinar serves as a microcosm of the broader challenges and opportunities that artificial intelligence (AI) presents in the educational landscape. The session brought together educators to explore the ethical implications of integrating AI into classrooms, highlighting the intersection between technological innovation and pedagogical responsibility.

The Ethical Imperative in AI Education

Central to the discussion was the need for educators to critically engage with AI, not just as a tool but as a complex system with far-reaching implications. Dr. Kip Glazer, principal at Mountain View High School, emphasized that understanding the technical distinctions between different types of AI—such as generative and supervised AI—is crucial for educators (see Ethical Use of AI – Privileging measured and deliberate thinking, for further thoughts from Dr. Glazer). This technical literacy forms the foundation for ethical decision-making, as educators must navigate the biases inherent in AI systems and their potential impact on students and teaching practices. The dialogue in the session reflects a growing recognition that AI’s role in education is not neutral; it is laden with ethical considerations that educators must address proactively.

Practical Engagement with AI Ethics

Assistant professor Dr. Victoria Delaney introduced the Stanford Classroom-Ready Resources About AI for Teaching (CRAFT) project, which exemplifies how these ethical considerations can be translated into classroom practice. By developing adaptable AI literacy resources, the CRAFT initiative seeks to empower teachers to integrate AI education in a way that is both practical and responsive to the needs of diverse student populations. The project underscores the importance of flexibility and customization in educational resources, recognizing that teachers must be able to tailor AI lessons to their specific classroom contexts.

This approach is further exemplified by my CRAFT Ethical Engine card game, a tool I designed to foster critical thinking and ethical reasoning among students. This game moves beyond theoretical discussions, offering a hands-on way for students to grapple with the real-world implications of AI. Through scenarios like AI in law enforcement or AI-controlled military drones, the game prompts students to consider both the benefits and risks of AI technologies, thereby cultivating a more nuanced understanding of AI ethics.

Collective Responsibility and Advocacy

The session also highlighted the collective responsibility of educators to advocate for ethical AI practices. The Educator Bill of Rights, discussed by Dr. Kip Glazer, is a testament to this advocacy. It asserts the rights of educators to have a say in the AI tools introduced into their work environments and emphasizes the need for transparency and equity in AI implementation. This document not only empowers educators to protect their professional autonomy but also ensures that AI adoption in schools does not exacerbate existing inequalities or undermine educational goals.

The session’s exploration of these themes reflects a broader narrative within education: the need for a critical, reflective approach to technology. As AI becomes increasingly integrated into classrooms, educators are not just passive recipients of these tools; they are active participants in shaping how AI is used and understood in educational settings. This requires a deep engagement with the ethical dimensions of AI, as well as a commitment to advocating for practices that are fair, transparent, and aligned with educational values.

Engaging Educators in Discussion

The CRAFT Ethical Engine card game resource presented in the session and the Educator Bill of Rights can serve as starting points for bringing educators and students into conversations about ethical issues. As the presenters emphasized in this webinar, it is important to empower educators to think critically about how to safeguard against the ethical pitfalls that these technologies can produce and bring awareness to students about potential issues.

A Unified Perspective on AI in Education

Synthesizing the insights from the session reveals a unified perspective on the role of AI in education: It is a powerful tool that holds both promise and peril. The session participants collectively underscore that the successful integration of AI into education hinges on the ability of educators to critically assess and ethically navigate these technologies. Furthermore, our conversations with educators illustrate the necessity of an ethical framework for AI in education, one that is informed by a deep understanding of the technology and a commitment to equity and fairness. It is my hope that this synthesis of ideas and the resources shared can provide guidance for educators who are navigating the complex landscape of AI. Educators need more resources to ensure they are equipped to make informed, ethical decisions that benefit both their students and the broader educational community.


About the Author

Marlon Matilla is an educator dedicated to advancing data-driven and technology-focused learning in K-12 STEM education. Since 2015, he has taught mathematics, computer science, and cybersecurity with a strong emphasis on hands-on learning. As a CIRCLS Educator Fellow, he has contributed to AI education initiatives, including the co-design of ethical AI resources through Stanford’s CRAFT Fellowship. His recent publication, Optimizing Breakfast Choices: Leveraging Data Analytics in Packaged Foods for Informed Student Nutrition Decisions, supported by the University of Arkansas’ NSF-funded Data Analytics Teacher Alliance RET program, is published in the ASEE Professional Engineering Education Repository. Committed to merging research with practice, Marlon (aka Matt) aims to continue as a researcher-educator, fostering data literacy and ethical AI technology use in education.

CRAFTing Better Learning Experiences: Infusing GenAI in Education Effectively and Ethically

CRAFT Framework in table format
CRAFT Framework by Andrew Fenstermaker, Drew Olsson and Sarah Hampton
by Andrew Fenstermaker, Drew Olsson, and Sarah Hampton

Introduction

Generative artificial intelligence (GenAI) stands to be a disruptive technology in education and all facets of our daily lives. While this technology offers significant advantages for teaching and learning, it can hinder both when used without a full understanding of how the technology works and how to evaluate the content it generates. The educator must remain the expert, the advocate, the arbiter, the human in the loop who decides why and when the technology gets used, and the critical evaluator who upholds the best of our human ideals.

CRAFT Framework Overview

Using GenAI to augment the lesson design process can seem overwhelming. From composing and revising prompts to evaluating the outputs, integrating GenAI requires a new set of literacy skills. CRAFT was collaboratively designed by Andrew Fenstermaker, Drew Olsson, and Sarah Hampton and augmented using GenAI. The framework serves as a step-by-step roadmap that scaffolds the process of infusing GenAI with the learning sciences to improve learning experiences ethically.

1. Create
The first step in the framework is to create a lesson plan, using GenAI prompting grounded in the learning sciences, based on a specific standard, age group or grade level, and time frame. Often, we start with a basic prompt that provides no persona or context for the model to use as it generates the output. As you can see in this example of generating a lesson plan on the main idea, the chatbot makes inferences about the grade level being taught, the length of time, and the materials available. The chatbot is simply following its algorithm, predicting the next word in sequence as it constructs a complete lesson plan.

Giving a chatbot a persona, such as an expert teacher, and providing more details up front can enhance its recommendations. Reviewing the output from the Detailed Prompt example, you will see that the targeted grade level, specific standard, and length of time are now tailored to our prompt. We can improve the outputs further by including a request for evidence-based best education practices from learning sciences research up front. Therefore, the goal of the create step in the CRAFT framework is to underpin a detailed prompt with learning sciences.

Comparison table of basic prompts, detailed prompts, and detailed + learning sciences + chain of thought prompts
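As a concrete, hypothetical illustration of the difference, the sketch below contrasts a bare prompt with a detailed prompt that sets a persona, a standard, a time frame, and a learning-sciences grounding. The `ask_chatbot` stub stands in for whatever GenAI tool you use; the grade level, standard, and strategies named are examples, not prescriptions.

```python
# Create step, sketched in Python: a basic prompt versus a detailed prompt.
# `ask_chatbot` is a placeholder; wire it to your GenAI tool of choice.

def ask_chatbot(prompt: str) -> str:
    raise NotImplementedError("Send the prompt to your GenAI tool of choice.")

basic_prompt = "Write a lesson plan on finding the main idea."

detailed_prompt = (
    "You are an expert 5th-grade ELA teacher. Write a 45-minute lesson plan "
    "for determining the main idea of an informational text. Ground the plan "
    "in evidence-based practices from the learning sciences, such as "
    "retrieval practice and worked examples, and name the practice used at "
    "each step."
)
```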

In addition, the results from the prompt with learning sciences can be optimized using chain-of-thought prompting. Chain-of-thought prompting is a technique that decomposes a complex task into a series of logical steps to increase the performance of the large language model (chatbot). As you explore the Detailed Prompt + Learning Sciences + Chain-of-Thought prompts, you will see that a series of 11 chain-of-thought prompts was used to continually refine the output. We used decomposition methods to break our complex task into smaller chunks, from identifying effect size and selecting an evidence-based learning strategy to incorporating elements aligned with student interests and crafting sentence starters to scaffold the process for students.
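A minimal sketch of that decomposition, assuming a chat tool that keeps conversation history: each follow-up refines one aspect of the draft. The four steps below are illustrative stand-ins for the authors’ eleven prompts.

```python
# Chain-of-thought refinement: decompose lesson revision into small steps.
# These follow-ups are illustrative, not the 11 prompts the authors used.

followups = [
    "List the evidence-based strategies in this lesson with their approximate effect sizes.",
    "Keep the strategy with the strongest evidence and rewrite the lesson around it.",
    "Add an engagement hook aligned with students' interests.",
    "Write three sentence starters that scaffold the closing discussion for students.",
]

conversation = ["<the detailed lesson-plan prompt from the Create step>"]
for step in followups:
    conversation.append(step)
    # Send the full conversation to your chatbot here, review the revised
    # draft, and only then move on to the next refinement step.
```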

2. Review
Advancements in GenAI capabilities and detailed prompting strategies may lull the user into “falling asleep at the wheel,” that is, over-relying on outputs without refining them for a given context with intentional consideration. The CARE framework is designed to critically evaluate each output for Clarity, Accuracy, Relevance, and Ethics.

  • Clarity – A lack of clarity in GenAI output might mean the output is not appropriate for a given audience, such as overly wordy sentences for a 5th grade class.
  • Accuracy – It is well known that AI makes things up, from citing studies that never existed to stating facts that are blatantly false. It is up to the user to fact-check GenAI outputs so that misinformation is not shared in a classroom setting.
  • Relevance – Often, an AI’s first output will not match the intent of the input given. Continue refining your prompts until the output fits your needs.
  • Ethics – GenAI is trained on biased data and can produce racist outputs. Inspecting outputs for possible stereotypical depictions or biased results is imperative to uphold integrity and respect for all.

3. Amplify
Once we have reviewed our lesson plan grounded in learning sciences using the CARE framework, we are ready for the Amplify stage of CRAFT. The recent National Ed Tech Plan highlights three different divides in instructional technology: Access, Design, and Use. The COVID-19 pandemic sparked a rapid influx of instructional technology, decreasing the access divide but exposing prominent divides in how teachers were designing digital learning and how students were using digital tools and resources.

The amplify stage fosters opportunities for teachers to work towards minimizing the Design and Use divides by using GenAI prompts centered around the SAMR (substitution, augmentation, modification, redefinition) model. The graphic below illustrates the progression from beginning with a standard, using chain-of-thought prompting grounded in learning sciences, reviewing with the CARE framework, and concluding with instructional technology enhancements through the SAMR model.

Flow chart demonstrating how a standard can be used to prompt AI with both the learning sciences and SAMR framework for lesson plan design

As outlined before, we continue to use chain-of-thought prompting to further refine our outputs. Within our amplified SAMR lesson plan, you can see that the additional prompts help refocus the large language model (chatbot) when the algorithm deviates from our original vision and allow us to select the specific level(s) of SAMR we wish to incorporate within our final lesson. Just as we did before, we must leverage the review stage to critically evaluate the outputs using the CARE framework.
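Continuing the same hypothetical prompt sequence, the Amplify step might look like the sketch below: follow-up prompts that request a specific SAMR level, plus a refocusing prompt for when the model drifts from the original vision. All wording is illustrative.

```python
# Amplify step: target specific SAMR levels with follow-up prompts and
# refocus the chatbot when it drifts. Wording is illustrative only.

samr_prompts = {
    "substitution": "Swap the paper graphic organizer for a digital one.",
    "augmentation": "Replace the exit ticket with a digital form that gives immediate feedback.",
    "modification": "Redesign the practice task so students co-annotate a shared document.",
    "redefinition": "Propose a task only possible with technology, such as publishing "
                    "student analyses for an authentic outside audience.",
}

refocus_prompt = (
    "Keep the learning target and the evidence-based strategy from the "
    "earlier draft unchanged; revise only the technology integration."
)

chosen_levels = ["augmentation", "redefinition"]  # pick the level(s) you want
for level in chosen_levels:
    print(samr_prompts[level])  # send each to the chatbot, then review with CARE
```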

4. Fine-tune
Leveraging the fine-tune stage allows us to ensure that our lesson is both effective and equitable by carefully evaluating the integration of instructional technology. This includes considering the context of use, implementation strategies, sustainability factors, and inclusivity for all learners. By focusing on these key areas, we can enhance the technology’s impact and ensure it aligns with our pedagogical goals.

Graphical representation of technology considerations in the fine-tune stage including context, implementation, sustainability, and inclusivity.

5. Transform
The final stage of the CRAFT framework does not require additional prompting or reviewing of outputs. Instead, by implementing the filtered enhancements, teachers transform the learning experience, making it more engaging, interactive, and effective for their students.

Conclusion and Call to Action

As GenAI continues to evolve, it is essential for educators to remain the experts in their classrooms and use technology as a tool to enhance, not dictate, their practices. We encourage you to employ the CRAFT Framework to center students in the learning design process. It is through the intentional underpinning of learning sciences that we can remove barriers to create rich learning experiences for all students.

Resources
Link to PDF of CRAFT (two pager)


About the Authors

Andrew Fenstermaker is the Instructional Technology Coordinator for the Iowa City School District. A perpetual learner who infused emerging technologies into his own classroom for ten years, he now works to empower educators through dynamic professional development, one-on-one coaching, and innovative lesson design that centers students and removes barriers to success. He is a Google Certified Coach and Innovator, leading efforts locally and nationally on adopting and scaling computational thinking and AI in education while sharing key deliverables through presentations and publications.

Drew Olsson is the Technology Integration Coordinator for the Agua Fria High School District and an advocate for staff and student AI integration, mindful EdTech implementation, and building tech literacy for all. He taught math and computer science for 9 years before moving into his current role, where he serves 5 comprehensive high schools and over 10,000 students. He is invested in providing powerful learning opportunities for all students so that they may thrive in an increasingly techno-centric world. Drew holds Master’s Degrees in Secondary Education and Educational Leadership from Arizona State University.

Sarah Hampton is a Technology and Curriculum Specialist for the Greenbrier County School District specializing in secondary math education. Prior to her current role, she brought passion for evidence-based instructional strategies and thoughtful technology integration to her middle and high school math and science classrooms. A veteran educator of 15+ years, Sarah works to bring the benefits of education research to more students through embedded professional development in her district and through collaboration with researchers and educators at the Center for Integrative Research in Computing and Learning Sciences.

Overcoming Barriers to Teaching Regulation of Learning

by Sarah Hampton and Dr. Dalila Dragnić-Cindrić

Photo by Allison Shelley for EDUimages
In our two previous blog posts, we talked about students’ individual self-regulated learning (SRL), group-level social regulation of learning (SoRL), and why it’s important to explicitly teach both alongside our content (Hampton & Dragnić-Cindrić, 2023a, 2023b). The link between students’ effective self-regulated learning and successful academic and life outcomes has been well documented (Dent & Koenka, 2016). If that’s the case, and if we know the benefits, why don’t more teachers focus on teaching it?

In this post, we will explore some of the barriers and possible solutions for teaching regulation of learning that we have seen in K-12 and higher education classrooms. Importantly, some of the barriers that surfaced during our conversations are within a teacher’s control, and others are not (e.g., district or state policies). In the spirit of teacher empowerment, this post focuses on the barriers and solutions within teachers’ control.

Barrier 1: Comprehensive instruction of SRL and/or SoRL requires the teacher to give up control, an uncomfortable idea for many of us.

Suggested Solution: Gradually but steadily release control of learning to the students, making them responsible for their own learning.

Elaboration: If we want students to take more responsibility for their own learning, then we must give responsibility back to them. Doing so gradually but steadily can help teachers overcome their own discomfort with releasing control as well as ease students into new, more active roles in their own learning.

In a recent study conducted in high school physics classrooms, Dalila and colleagues showed that the level of teachers’ control over collaborative groups’ dialogues impacted groups’ SoRL. Students in groups in which the teacher controlled the conversation engaged in less conversation with each other and enacted less SoRL (Dragnić-Cindrić et al., 2023).

For Sarah, our conversation about this study led to a somewhat sobering realization. As a reflective practitioner, she said, “I realized that I had been robbing my students of taking more responsibility for their learning because I was holding onto so much of it. In an effort to maximize our learning minutes, head off classroom disruptions at the pass, and ensure successful learning outcomes, I have hoarded control of my students’ learning experiences.”

If we want students to take more ownership, we must shift more control over learning back to them. Gradual release of control means providing more support and guidance at the beginning, then fading the support as students demonstrate increased capability to manage their own learning. During our conversations on this topic, Sarah said her “aha” moment came when Dalila pointed out that regulation happens whether a teacher acknowledges it or not. “You’re modeling regulation whether you’re intentional about it or not. You’re either modeling good examples or bad examples. It’s about taking advantage of the opportunity to help students learn how to regulate their learning individually and with others.”

That leads us to the next barrier…

Barrier 2: Teachers may not be sure how to teach regulation of learning.

Suggested Solution: To teach regulation of learning, include both modeling and direct instruction of regulation of learning.

Elaboration: As teachers, we have made a career in education and are most likely effective at regulating our own learning. We have probably automated many regulation strategies and don’t even need to think about them, which can make it difficult to understand the perspective of students who find learning how to learn challenging. Because we haven’t had to explicitly think about regulation to navigate learning challenges in our own lives, we may not know how to model and explicitly articulate learning strategies to our students.

Additionally, most teacher preparation programs do not include courses on how to teach regulation of learning. We also recognize that teachers with many demands on their time don’t have the luxury of independently learning about best practices for teaching regulation and developing worksheets, prompts, reflections, etc., to help their students with regulation of learning. Still, there are some steps that can be taken to improve students’ regulation of learning through modeling and direct instruction (Paris & Paris, 2001).

  1. Reflect on your own learning strategies and take time to model them for your students. Narrate your own thought processes and explain how you approach and solve problems. Learn more about regulation of learning and how to teach it. We gave a brief overview in the first post of the series, but we have included more teacher-friendly resources in the Additional Resources section below. For a self-paced professional learning experience, you might like the “Self-regulation professional development module” by the Students at the Center Hub.
  2. Explicitly teach students effective regulation of learning and learning strategies you’re already familiar with, such as:
  • Modify your learning environment and structure study time: Studying is more effective if you eliminate distractions and study in short time intervals followed by brief breaks. Put your phone away and engage in a focused 15-minute study session followed by a 5-minute break (yes, this is the time to check that phone!).
  • Summarize text and tell someone about it: When studying new material, an effective approach is to read the text and then write a summary of the main points or tell someone else, a friend or a family member, about it. Go into as much detail as you can. If there are things you cannot recall, that’s a sign you might want to read that part again. Many students rely exclusively on text highlighting and re-reading. These strategies are ineffective because they create “illusions of knowing,” a false sense that you have learned the material.
  • Quiz yourself to memorize new words or concepts: In subjects where memorizing content is needed (e.g., studying vocabulary), quizzing works! Quiz yourself and ask others to quiz you.
  • Seek help when you get stuck: It is okay to ask for help, and smart students do! If you are stuck, ask others to explain how they approach similar problems. Show your teacher your work and walk them through it — they will be happy to help you identify the rough spots and help you work through them.

We provide links to the additional learning strategy resources below.

Barrier 3: From a short-term perspective, teaching regulation of learning feels like a less valuable use of time than teaching content.

Suggested Solution: Embrace teaching regulation of learning as an inextricable part of teaching your content’s process standards. In other words, part of the standards we’re expected to teach requires students to engage in regulation of learning (see examples below).

Elaboration: Regulation of learning isn’t directly assessed, so when it comes to spending 10 minutes of class time, teachers are likely to choose learning content over learning how to learn. However, hyperfocusing on content standards over process standards is short-sighted, not just a short-term trade-off: the research suggests that teaching regulation pays content-learning dividends within a single school year (Dignath & Büttner, 2008). Beyond that, learning how to navigate challenges and find a way to learn, alone and together, will benefit learners their entire lives.

Many school districts are adopting big-picture mission statements and portraits of a graduate. Most have a line about creating self-sufficient lifelong learners. Teaching regulation of learning is a critically important way to spend your class time. Justify that time (to yourself and others!) using your existing state and national standards and school, district, and/or state mission statements. Here are some examples:

The National Council of Teachers of Mathematics (NCTM) problem-solving process standards call for teachers to:

  • Allow students to apply and adapt a variety of appropriate strategies to solve problems
  • Allow students to monitor and reflect on their own and others’ strategies for solving problems

The National Council of Teachers of English (NCTE) calls for students to:

  • Participate as knowledgeable, reflective, creative, and critical members of a variety of literacy communities.

The National Science Teaching Association (NSTA) emphasizes that:

  • Learning is an active, constructive process, and not a receptive one;
  • High-quality science, engineering, mathematics, and technology education fosters students’ 21st-century skills of collaboration, problem solving, communication, and creative thinking.

The North Carolina Department of Public Instruction’s “A Portrait of a Graduate” emphasizes that, in addition to academic content, schools must be more intentional about fostering durable skills critical for students’ success, including a learner’s mindset, personal responsibility, and collaboration.

    These are a few of the challenges we have identified. What other barriers prevent you or your colleagues from teaching regulation of learning? How have you navigated these challenges in your classroom? We would love to hear your thoughts — tweet us at @EducatorCIRCLS!

    References

    Dent, A.L., & Koenka, A.C. (2016). The Relation Between Self-Regulated Learning and Academic Achievement Across Childhood and Adolescence: A Meta-Analysis. Educational Psychology Review, 28, 425–474. https://doi.org/10.1007/s10648-015-9320-8

    Dignath, C., & Büttner, G. (2008). Components of fostering self-regulated learning among students. A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning, 3, 231–264. https://doi.org/10.1007/s11409-008-9029-x

    Dragnić-Cindrić, D., Lobczowski, N. G., Greene, J. A., & Murphy, P. K. (2023). Exploring the teacher’s role in discourse and social regulation of learning: Insights from collaborative sessions in high-school physics classrooms. Cognition and Instruction, 1–32. https://doi.org/10.1080/07370008.2023.2266847

    Hampton S., & Dragnić-Cindrić, D. (2023a). Regulation of learning: What is it, and why is it important? Center for Integrative Research in Computing and Learning Sciences. https://circls.org/educatorcircls/regulation-of-learning-what-is-it-and-why-is-it-important

    Hampton S., & Dragnić-Cindrić, D. (2023b). Social Regulation of Learning and Insights for Educators. Center for Integrative Research in Computing and Learning Sciences. https://circls.org/educatorcircls/social-regulation-of-learning-and-insights-for-educators

    North Carolina Department of Public Instruction. (n.d.). Portrait of a graduate. https://www.dpi.nc.gov/districts-schools/operation-polaris/portrait-graduate#Tab-DurableSkills-4800

    Paris, S. G., & Paris, A. H. (2001). Classroom applications of research on self-regulated learning. Educational Psychologist, 36(2), 89–101. https://doi.org/10.1207/S15326985EP3602_4

    Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
    Suggested citation format: Hampton, S., & Dragnić-Cindrić, D. (2023). Overcoming barriers to teaching regulation of learning. Educator CIRCLS Blog. Retrieved from

    Acknowledgements

    This material is based upon work supported by the National Science Foundation grant number 2101341 and grant number 2021159. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

    Additional Resources:

    Elaboration | How Expanding On Ideas Increase Outcomes | Science of Learning Series
    Interleaving | Mixed up Practice | Science of Learning Series
    Self-regulated learning: The technique smart students use.
    Spacing | Revisit Material To Boost Outcomes | Science of Learning Series
    Teacher Support of Co- and Socially-Shared Regulation of Learning in Middle School Mathematics Classrooms

    Glossary of Artificial Intelligence Terms for Educators

A blue, robot-looking windup toy with red feet and a metal coil across its head.
    Photo by Rock’n Roll Monkey on Unsplash
    by Pati Ruiz and Judi Fusco

This glossary was written for educators to reference when learning about and using artificial intelligence (AI). We start with a definition of artificial intelligence and then provide definitions of AI-related terms in alphabetical order. This glossary was last updated on March 31, 2024.

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” to do things like make decisions, discover patterns, and perform some sort of action. AI is a general term, and there are more specific terms used in the field of AI. AI systems can be built in different ways; two of the primary ways are (1) through the use of rules provided by a human (rule-based systems) or (2) with machine learning algorithms. Many newer AI systems use machine learning (see definition of machine learning below).

    ________________________________________________
Algorithm: Algorithms are the “brains” of an AI system and determine its decisions; in other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine learning for more), or they can be rule-based, with human programmers supplying the rules.
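To make this distinction concrete, here is a minimal, hypothetical sketch in Python. The scenario, names, and numbers are invented for illustration: the first function uses a rule a human wrote, while the second derives its own cutoff from example data, which is the essence of a learned rule.

```python
# Rule-based: a human programmer writes the decision rule directly.
def rule_based_flag(reading_minutes):
    # A teacher chose this cutoff themselves.
    return "needs support" if reading_minutes < 20 else "on track"

# Learned: the program derives its own cutoff from labeled examples.
def learn_threshold(examples):
    """examples: list of (reading_minutes, needed_support) pairs."""
    supported = [m for m, needed in examples if needed]
    not_supported = [m for m, needed in examples if not needed]
    # Place the cutoff halfway between the two groups' averages.
    return (sum(supported) / len(supported) +
            sum(not_supported) / len(not_supported)) / 2

data = [(5, True), (12, True), (30, False), (45, False)]
print(rule_based_flag(15))                       # needs support
print("learned cutoff:", learn_threshold(data))  # 23.0 -- discovered, not written
```

Real machine learning algorithms are far more sophisticated, but the contrast is the same: in the first case a human supplies the rule, and in the second the data do.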

Artificial General Intelligence (AGI): Artificial general intelligence has not yet been realized; it refers to an AI system that could learn, understand, and solve any problem a human can.

Artificial Narrow Intelligence (ANI): Today’s AI systems solve narrow problems, and this is called artificial narrow intelligence. For example, a smartphone can use facial recognition to identify photos of an individual in the Photos app, but that same system cannot identify sounds.

Generative AI (GenAI): A type of machine learning that generates content, currently including text, images, music, and video, and that can create 3D models from 2D input. ChatGPT (see definition below) is a specific example of GenAI.

    Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see definitions for neural networks and Natural Language Processing below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).

Transformer models: Used in GenAI (the T in ChatGPT stands for Transformer), transformer models are a type of language model. They are neural networks and are also classified as deep learning models. They give AI systems the ability to determine and focus on the important parts of the input and output, using something called a self-attention mechanism.

Self-attention mechanism: These mechanisms, also referred to simply as attention, help systems determine the important aspects of input in different ways. There are several types; they were inspired by how humans can direct their attention to important features in the world, understand ambiguity, and encode information.
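For readers who want to peek under the hood, here is a minimal sketch of the scaled dot-product self-attention computation at the heart of transformers, written in Python with NumPy. It is simplified: the learned query/key/value projections are skipped, and the tiny “word vectors” are made up for illustration.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.

    X: (sequence_length, d) array, one row per token. Real transformers
    first project X into separate query/key/value matrices with learned
    weights; we skip that step to keep the core idea visible.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # how strongly each token relates to each other token
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row becomes attention weights summing to 1
    return weights @ X                             # each output is a weighted mix of all tokens

# Three toy word vectors; the first two are similar, so they attend to each other more.
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
print(self_attention(X))
```

The key idea is that every token’s output blends information from every other token, weighted by relevance, rather than processing words strictly one at a time.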

Large language models (LLMs): Large language models form the foundation for generative AI (GenAI) systems. GenAI systems include some chatbots and tools, including OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, an LLM detects statistical relationships in its training text, such as how likely a word is to appear after the words before it. As they answer questions or write text, LLMs use this model of word likelihoods to predict the next word to generate. LLMs are a type of foundation model, which are pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include data sets of text without the creators’ consent.
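As a drastically simplified illustration of “predicting the next word from statistics of the training text,” here is a toy bigram model in Python. Real LLMs use deep neural networks over vast corpora rather than a lookup table, but the basic move, choosing a likely next word, is the same. The sample corpus is invented.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the next word most often seen after this one in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat' -- it followed 'the' more often than 'mat' or 'fish'
```

Notice that the model can only echo patterns from its training text, which is also why the quality and breadth of training data matter so much.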

    Computer Vision: Computer Vision is a set of computational challenges concerned with teaching computers how to understand visual information, including objects, pictures, scenes, and movement (including video). Computer Vision (often thought of as an AI problem) uses techniques like machine learning to achieve this goal.

    Critical AI: Critical AI is an approach to examining AI from a perspective that focuses on reflective assessment and critique as a way of understanding and challenging existing and historical structures within AI. Read more about critical AI.

    Data: Data are units of information about people or objects that can be used by AI technologies.

Training Data: This is the data used to train the algorithm or machine learning model. It was generated by humans in their work or other contexts in the past. Although it sounds simple, training data matter enormously because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)¹
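Here is a toy sketch, in Python, of how biased training data becomes a biased rule. The hiring records, school names, and logic are all hypothetical; the point is only that a “learner” that imitates past decisions reproduces their skew.

```python
# Hypothetical historical hiring records: (school, was_hired)
past_hires = [("State U", True), ("State U", True), ("State U", True),
              ("City College", False), ("City College", False)]

# A naive "learner" that imitates the majority outcome for each school.
history = {}
for school, hired in past_hires:
    history.setdefault(school, []).append(hired)

def model(school):
    outcomes = history.get(school, [])
    # Predict "hire" only if applicants from this school were usually hired before.
    return sum(outcomes) / len(outcomes) > 0.5 if outcomes else False

print(model("State U"))        # True
print(model("City College"))   # False, regardless of how strong the applicant is
```

No one wrote “reject City College” anywhere; the rule emerged from the data, which is exactly how historical bias slips into systems that look neutral.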

Foundation Models: Foundation Models represent a large amount of data that can be used as a foundation for developing other models. For example, generative AI systems use large language foundation models. They can be a way to speed up the development of new systems, but there is controversy about using foundation models since, depending on where their data come from, there are different issues of trustworthiness and bias. Jitendra Malik, Professor of Computer Science at UC Berkeley, once said the following about foundation models: “These models are really castles in the air, they have no foundation whatsoever.”

    Human-centered Perspective: A human-centered perspective sees AI systems working with humans and helping to augment human skills. People should always play a leading role in education, and AI systems should not replace teachers.

Intelligence Augmentation (IA): Augmenting means making something greater; in some cases, perhaps it means making it possible to do the same task with less effort. Maybe it means letting a human (perhaps a teacher) choose not to do all the redundant tasks in a classroom but automate some of them so they can do more of the things only a human can do. It may mean other things. There’s a fine line between augmenting and replacing, and technologies should be designed so that humans can choose what a system does and when it does it.

    Intelligent Tutoring Systems (ITS): A computer system or digital learning environment that gives instant and custom feedback to students. An Intelligent Tutoring System may use rule-based AI (rules provided by a human) or use machine learning under the hood. By under the hood we mean the underlying algorithms and code that an ITS is built with. ITSs can support adaptive learning.

    Adaptive Learning: Subject or course material is adjusted based on the performance of the learner. The difficulty of material, the pacing, sequence, type of help given, or other features can be adapted based on the learner’s prior responses.
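As a concrete (and entirely hypothetical) sketch of the adaptation logic an ITS might use, here is a Python function that picks the next difficulty level from a student’s recent answers. The thresholds and the five-level scale are invented; real systems rely on much richer learner models.

```python
def next_difficulty(current_level, recent_correct):
    """Pick the next difficulty level from recent performance.

    current_level: 1 (easiest) through 5 (hardest)
    recent_correct: list of True/False for the last few answers
    """
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:                     # doing well: step up the challenge
        return min(current_level + 1, 5)
    if accuracy < 0.4:                      # struggling: step down and offer help
        return max(current_level - 1, 1)
    return current_level                    # otherwise, hold steady

print(next_difficulty(3, [True, True, True, True, False]))     # 4
print(next_difficulty(3, [False, False, True, False, False]))  # 2
```

The same shape of logic can adapt pacing, hint frequency, or the sequence of topics instead of difficulty.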

Interpretable Machine Learning (IML): Interpretable machine learning, sometimes also called interpretable AI, describes the creation of models that are inherently interpretable: they provide their own explanations for their decisions. This approach is preferable to explainable machine learning (see definition below) for many reasons, including that we should understand what is happening in our systems from the beginning, rather than trying to “explain” black boxes after the fact.

Black Boxes: We call things we don’t understand “black boxes” because what happens inside the box cannot be seen. Many machine learning algorithms are “black boxes,” meaning that we don’t have an understanding of how a system uses features of the data when making its decisions (generally, we do know what features are used, but not how they are used). There are currently two primary ways to pull back the curtain on the black boxes of AI algorithms: interpretable machine learning (see definition above) and explainable machine learning (see definition below).

    Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.

    It’s important to note that in machine learning, the algorithm is doing the work to improve and does not have the help of a human programmer. It is also important to note three more things. One, in most cases the algorithm is learning an association (when X occurs, it usually means Y) from training data that is from the past. Two, since the data is historical, it may contain biases and assumptions that we do not want to perpetuate. Three, there are many questions about involving humans in the loop with AI systems; when using ML to solve AI problems, a human may not be able to understand the rules the algorithm is creating and using to make decisions. This could be especially problematic if a human learner was harmed by a decision a machine made and there was no way to appeal the decision.

    Illustration of the topology of a generic Artificial Neural Network. A first column of three stacked circles labeled input pointing to a second column of four stacked circles labeled hidden pointing to a final column of two circles labeled output. All circles are the same size and a pale yellow color with a thin black outline.
    Illustration of the topology of a generic Artificial Neural Network. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Neural Networks (NN): Neural networks, also called artificial neural networks (ANNs), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in the human brain. In a neural network, data enter at an input layer, pass through a hidden layer of nodes where calculations adjust the strengths of connections, and then reach an output layer.
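Here is a minimal sketch of the forward pass this definition describes, in Python with NumPy, using the same 3-input, 4-hidden-node, 2-output shape as the illustration above. The weights are random placeholders; training, i.e., adjusting those connection strengths from data, is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network shaped like the illustration: 3 inputs -> 4 hidden nodes -> 2 outputs.
W_hidden = rng.normal(size=(3, 4))   # connection strengths, input layer -> hidden layer
W_output = rng.normal(size=(4, 2))   # connection strengths, hidden layer -> output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))      # squashes each node's weighted sum into (0, 1)

def forward(x):
    hidden = sigmoid(x @ W_hidden)   # each hidden node combines the weighted inputs
    return sigmoid(hidden @ W_output)

print(forward(np.array([0.5, -1.0, 2.0])))   # two output values between 0 and 1
```

“Deep” networks (next definition) simply stack more hidden layers between the input and output.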

    Deep Learning: Deep learning models are a subset of neural networks. With multiple hidden layers, deep learning algorithms are potentially able to recognize more subtle and complex patterns. Like neural networks, deep learning algorithms involve interconnected nodes where weights are adjusted, but as mentioned earlier there are more layers and more calculations that can make adjustments to the output to determine each decision. The decisions by deep learning models are often very difficult to interpret as there are so many hidden layers doing different calculations that are not easily translatable into English rules (or another human-readable language).

    Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

NLP technologies help in many situations, including scanning texts to turn them into editable text (optical character recognition), speech-to-text, voice-based computer help systems, grammatical correction (like autocorrect or Grammarly), summarizing texts, and others.
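As one tiny, hypothetical taste of an NLP task from that list, here is a naive extractive summarizer in Python: it scores each sentence by how frequent its words are in the whole text and keeps the top one. Real summarization systems are far more sophisticated, but this shows how far even simple word statistics can go.

```python
import re
from collections import Counter

def summarize(text):
    """Return the single sentence whose words are most frequent overall."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return max(sentences, key=score)

text = ("Robots can help students practice. Practice helps students learn. "
        "Some robots tell jokes.")
print(summarize(text))  # picks the sentence that shares the most common words
```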

    Robots: Robots are embodied mechanical machines that are capable of doing a physical task for humans. “Bots” are typically software agents that perform tasks in a software application (e.g., in an intelligent tutoring system they may offer help). Bots are sometimes called conversational agents. Both robots and bots can contain AI, including machine learning, but do not have to have it. AI can help robots and bots perform tasks in more adaptive and complex ways.

    User Experience Design/User Interface Design (UX/UI): User-experience/user-interface design refers to the overall experience users have with a product. These approaches are not limited to AI work. Product designers implement UX/UI approaches to design and understand the experiences their users have with their technologies.

    Explainable Machine Learning (XML) or Explainable AI (XAI): Researchers have developed a set of processes and methods that allow humans to better understand the results and outputs of machine learning algorithms. This helps developers of AI-mediated tools understand how the systems they design work and can help them ensure that they work correctly and are meeting requirements and regulatory standards.

It is important to note that the term “explainable” in the context of explainable machine learning or explainable AI refers to an after-the-fact account of how a model arrived at its output, not a built-in understanding of how the model works. In theory, explainable ML/AI means that an ML/AI model will be “explained” after the algorithm makes its decision so that we can understand how the model works. This often entails using another algorithm to help explain what is happening inside the “black box.” One issue with XML and XAI is that we cannot know for certain whether the explanation we are getting is correct, so we cannot technically trust either the explanation or the original model. Instead, researchers recommend the use of interpretable models.
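A small sketch of the post-hoc idea, in Python: treat a model as a black box and probe it to estimate which input matters. Everything here, the model, inputs, and probing method, is hypothetical and simplified; as the paragraph cautions, such explanations approximate the black box and may themselves be wrong.

```python
# A "black box": we can query it, but we pretend we cannot read its insides.
def black_box(hours_studied, hours_slept):
    return 0.8 * hours_studied + 0.1 * hours_slept   # hidden logic

def explain(model, point, delta=1.0):
    """Estimate each input's influence by nudging it and watching the output."""
    base = model(*point)
    influences = {}
    for i, name in enumerate(["hours_studied", "hours_slept"]):
        nudged = list(point)
        nudged[i] += delta
        influences[name] = model(*nudged) - base     # change in output per unit change
    return influences

print(explain(black_box, (10, 8)))
# roughly {'hours_studied': 0.8, 'hours_slept': 0.1}: studying appears to matter more
```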

    Thank you to Michael Chang, Ph.D., a CIRCLS postdoctoral scholar, for reviewing this post and to Eric Nentrup for support with specific definitions. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

    Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
    Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

________________________________________________
¹ Fusco, J. (2020). Book Review: You Look Like a Thing and I Love You. CIRCLEducators Blog. Retrieved from https://circleducators.org/review-you-look-like-a-thing/

    Social Regulation of Learning and Insights for Educators

    by Sarah Hampton and Dr. Dalila Dragnić-Cindrić

    In the first post of this series (Hampton & Dragnić-Cindrić, 2023), we focused primarily on individual student’s self-regulated learning (SRL), explained the related key terms and ideas, and discussed why it is important to teach SRL alongside subject content. In this post, we will focus on regulation of learning in small, collaborative groups.

    Social regulation of learning (SoRL) occurs when students in collaborative groups purposefully select, use, and, if necessary, adjust their collective actions and behaviors to achieve shared learning goals (Hadwin et al., 2018). Navigating group dynamics and collaborating well are skills all students need. SoRL is an essential prerequisite for successful collaboration (Dragnić-Cindrić & Greene, 2021).

    Social regulation of learning (Figure 1) unfolds through the same three loose phases of learning as SRL (i.e., preparation, execution, reflection), and it has the added complexity of coordinating with others. Collaborative groups enact their SoRL by relying on one or more of the following modes of regulation: self-regulated learning (SRL), coregulated learning (CoRL), and socially-shared regulation of learning (SSRL).

    These three modes of regulation vary in their focus:

  • SRL focuses on what “I” do within the group related to my own learning;
  • CoRL focuses on what “you” do and how I can temporarily help you with your regulation; and
  • SSRL focuses on what “we” do together to propel joint learning.

CoRL occurs when one group member temporarily supports one or more others in the group, with the goal of eventually transitioning regulation of learning to the regulated student(s). For example, if a student is repeatedly distracted by looking at another group, a teammate might prompt them a few times to pay attention to their own group. After a few prompts, the “regulated” student might decide to switch seats to fully engage with the group and avoid further disruption.

    SSRL is characterized by the equal and balanced participation of all group members in the group’s regulation of learning. During SSRL, group members build on each other’s actions and statements to create synergistic outcomes.

    Figure 1. Social regulation of learning infographic

    Note: This graphic shows a three-person collaborative group engaging in social regulation of learning. The group first plans how to do the task. Then, they attempt to execute their plan and fail. They reflect on what went wrong and what they needed to change. Finally, they try again and achieve their goal.

    When you consider all the ways learners must regulate during group work—self, others, and each other—it’s not surprising that successful collaboration can be challenging. Importantly, students don’t have to regulate their learning all the time. In fact, when students are satisfied with their learning progress, there is no need to regulate. Typically, regulation unfolds as a response to an encountered challenge. For example, some group members might lose their motivation for the task and want to quit. Other group members might need to actively encourage them and point out the progress the group made so far to get them to re-engage.

    Most of what we’ve discussed so far has been about what students in the group are doing to regulate their own learning. It is also possible that someone outside the group—the teacher or even a student from another group—might need to help with the group’s regulation of learning. This is called external regulation of learning. For example, a teacher may decide to step in if a group is engaging in excessive off-task behavior or if they are repeatedly trying an ineffective learning strategy.

Such an intervention involves trade-offs between the teacher’s control over the group’s learning and allowing adequate space and time for the students to learn how to socially regulate their own learning (Dragnić-Cindrić et al., 2023). Think of it like this: when a child first learns to tie shoelaces, it’s clumsy and time-consuming and requires multiple tries with some help. It would be much faster if a parent tied them instead. However, if the parent repeatedly steps in and ties the child’s shoelaces, the child never has the opportunity to learn. Given enough space and time, the child eventually learns to tie them quickly, and the parent never has to intervene again. Similarly, the teacher’s job is to discern when and how to offer the least assistance possible, helping students grow in SRL, CoRL, and SSRL, and to recognize which mode of regulation is the most appropriate in a given situation.

The quality of a group’s regulation of learning is closely connected to the group climate (Dragnić-Cindrić & Greene, 2021), a persistent pattern of group members’ interactions, emotions, and behaviors that remains stable over time. Successful groups tend to have a positive group climate characterized by positive interactions. For example, group members praise each other’s ideas, offer encouragement when mistakes are made, and joke and laugh together. It is important to establish a positive climate from the first collaborative session; clear group norms and teacher modeling of desired interactions can help with that. Teachers should step in when off-task or negative behaviors hurt the group climate or even the classroom culture in ways that make growth unlikely.

    In other words, rather than managing students directly, teachers should manage the classroom conditions that allow students to manage their own learning. We include research-based teachers’ moves in the table below (Table 1).

    Table 1. Research-based recommendations for teachers

    In the final blog post of this series, we’ll explore some barriers and potential solutions for teaching regulation of learning in our classrooms. Meanwhile, we would love to hear from you. Are you already incorporating some teacher moves that facilitate regulation of learning in your classroom? If so, which ones? If not, which moves could you implement easily? Let us know by engaging with us on social media @EducatorCIRCLS!

    Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
    Suggested citation format: Hampton, S., & Dragnić-Cindrić, D. (2023). Social Regulation of Learning and Insights for Educators. Educator CIRCLS Blog. Retrieved from https://circls.org/educatorcircls/social-regulation-of-learning-and-insights-for-educators

    Acknowledgements
    This material is based upon work supported by the National Science Foundation grant number 2101341 and grant number 2021159. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

    Resources

    Dragnić-Cindrić, D., & Greene, J. A. (2021). Social regulation of learning as a base for successful collaboration. (Rapid Community Report Series). Digital Promise, International Society of the Learning Sciences, and the Center for Integrative Research in Computing and Learning Sciences. https://repository.isls.org//handle/1/6854

    Dragnić-Cindrić, D., Lobczowski, N. G., Greene, J. A., & Murphy, P. K. (2023). Exploring the teacher’s role in discourse and social regulation of learning: Insights from collaborative sessions in high-school physics classrooms. Cognition and Instruction, 1-32. https://doi.org/10.1080/07370008.2023.2266847

    Hadwin, A. F., Järvelä, S., & Miller, M. (2018). Self-regulation, co-regulation, and shared regulation in collaborative learning environments. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (pp. 83–105). Routledge.

    Hampton S., & Dragnić-Cindrić, D. (2023). Regulation of learning: What is it, and why is it important? Center for Integrative Research in Computing and Learning Sciences. https://circls.org/educatorcircls/regulation-of-learning-what-is-it-and-why-is-it-important

    Regulation of Learning: What is it, and why is it Important?

    by Sarah Hampton and Dr. Dalila Dragnić-Cindrić

    How many of us want our students to be highly motivated learners? Proactive? Goal-directed? Strategic? Perseverant? Adaptive? We’ve heard teachers across subjects and grade bands say that these are difference-making characteristics that students need to be successful in and out of the classroom. In educational research terms, students who demonstrate these qualities are skilled at regulating their learning. Researchers have dedicated significant efforts to understanding self-regulated learning skills and the underlying processes. In a series of three blog posts, we take a closer look at regulation of learning, why it matters to students and teachers, and how educators might foster it in classroom settings.

    As Timothy Cleary describes in The Self-Regulated Learning Guide (2018), self-regulated learners “want to perform well on some activity” and “purposefully and strategically figure out ways to achieve their goals…despite experiencing challenges, barriers, or struggles, [they] continuously find a way to learn” (pp. 9-10). Interestingly, they do this by repeating three fairly simple phases (Figure 1):

    1. Before the Learning (Preparation Phase)–self-motivating and figuring out how to approach the task;
    2. During the Learning (Execution Phase)–maintaining motivation, using strategies to complete the task, and self-monitoring thinking and actions during learning;
    3. After the Learning (Reflection Phase)–determining how well the selected strategies helped accomplish the task and deciding how to improve next time.

    Three phases of self-regulated learning cycle: preparation, execution, reflection.
    Figure 1. Phases of self-regulated learning.

    Note: This figure shows phases of self-regulated learning and steps students can take throughout this cyclical process. Adapted from the original figure The Cycle of Self-Regulated Learning by Karen Kirk from Develop Self-Regulated Learners: Choosing and Using the Best Strategies for the Task. Published under the Creative Commons license.

Self-regulated learning is cyclical, and its phases are iterative and loosely sequenced; students might move from one phase to the next or revisit previous phases as needed. Thinking about strategies involves thinking about learning strategies (e.g., ignoring distractions, re-reading task instructions) as well as the best content-area strategies to use for a given task.

    But what do we do when our students aren’t particularly skilled in regulating their learning? Can regulation be learned? Can regulation be taught? Should it be taught?

    Some learners figure out how to regulate their learning on their own and then go on to do it automatically without much thought. That can lead us to believe that some people just get it and some people don’t. However, that kind of fixed mindset thinking isn’t accurate. Regulation of learning can be learned and strengthened when people become aware of the principles and processes behind it and consciously reflect on how to do it better. In fact, when learners realize that the strategies they select are directly linked to how successful they are with tasks, they experience greater self-efficacy, motivation, and success on future tasks (Greene, 2018).

    Likewise, regulation of learning can be taught when we explicitly talk about it with our students, model it for them, and prompt them to engage in it before, during, and after learning activities in our classes. For example, a mathematics teacher might ask her students to fill out a task planning sheet before starting a task (Figure 2) to help them prepare for learning.

    An example of a student mathematics task planning sheet.
    Figure 2. Student task planning sheet by D. Dragnić-Cindrić and S. Hampton

    The purpose of engaging students in task planning is to get them to think about the task and their own goals for it, which might differ from the teacher’s goals. It connects the doing of the task to the time the teacher allotted for it and the materials students will need to use to get it done. Lastly, it leads the students to think about and plan the steps needed to complete the task beforehand. Of course, this plan is a starting point and should remain flexible as students work through the task. The idea behind scaffolds like this planning sheet is that they help students internalize and learn how to engage in self-regulated learning, and over time, begin to do it on their own in other classes.

So regulation can be learned and taught, but should it be? In our conversations on this topic, we relied on our combined expertise: Sarah’s, as a practitioner and teacher coach with 15 years of experience, and Dalila’s, as a learning scientist who studies individual and group regulation of learning. During our conversation, we discussed the current regulation of learning literature, Dalila’s own research findings, and Sarah’s deep knowledge of classroom contexts. We concluded that teaching regulation is so important because, immediately, it helps students see what they do in the classroom as something they do for themselves rather than something they do for the teacher, parents, or school, and, ultimately, it prepares students for success in any career path. The bottom line is that regulation empowers students and prepares them for life.

    Everyone is going to encounter a difficult moment, an exceptional challenge, and regulation is critical in that moment. Regulation of learning strategies are for everyone. Even if you don’t need them today, I promise you, a day will come when you’ll need them. – Dalila Dragnić-Cindrić

In addition to the benefits for students, an upfront investment in teaching regulation of learning returns dividends for teachers, too. Imagine having a classroom full of students who are active and confident self-regulated learners rather than passive recipients of knowledge. Some of the time you currently spend motivating learners and managing your classroom could be repurposed for more personalized instruction as students begin diagnosing their own learning barriers and requesting specific kinds of help. Teaching regulation of learning relieves teachers of sole responsibility for students’ progress while equipping students to assume more ownership of their learning success.

    Research supports what we intuitively know–helping students learn to be highly motivated, proactive, goal-directed, strategic, perseverant, adaptive learners is a game changer for them, and we can accomplish it by explicitly teaching and modeling regulation of learning skills. Because the benefits transcend subject areas and career paths, we would argue that teaching regulation is even more important than teaching subject specific content. Thankfully, teaching content and regulation of learning together is the best way to teach them both.

    In the coming blog posts within this series, we’ll explore some barriers and potential solutions for teaching regulation of learning in our classrooms. We’ll also discuss regulation of learning in collaborative groups (i.e., social regulation of learning) and hear more from Dalila about her research on this topic and from Sarah about her experiences with managing collaborative groups in her math and science classrooms. Together we will offer insights and recommendations for educators.

    Do you think it’s important to explicitly teach regulation of learning? Why or why not? If you’re already teaching it, let us know your favorite strategies by tweeting @EducatorCIRCLS!

    Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
    Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

    Resources

    Cleary, T. J. (2018). The self-regulated learning guide: Teaching students to think in the language of strategies. Routledge. https://doi.org/10.4324/9781315693378

    Greene, J. A. (2018). Self-regulation in education. Routledge. https://doi.org/10.4324/9781315537450

    Kirk, K. (n.d.). The cycle of self-regulated learning [Figure]. The Supporting and Advancing Geoscience Education at Two-Year Colleges (SAGE 2YC) project website. Retrieved July 13, 2023, from https://serc.carleton.edu/sage2yc/self_regulated/index.html