All posts by Pati Ruiz

Glossary of Artificial Intelligence Terms for Educators

A blue, robot-looking windup toy with a metal coil across its head and red feet.
Photo by Rock’n Roll Monkey on Unsplash
by Pati Ruiz and Judi Fusco

This glossary was written for educators to reference when learning about and using artificial intelligence (AI). We will start with a definition of artificial intelligence and then provide definitions of AI-related terms in alphabetical order. This glossary was last updated on March 31, 2024.

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” to do things like make decisions, discover patterns, and perform some sort of action. AI is a general term, and there are more specific terms used in the field of AI. AI systems can be built in different ways; two of the primary ways are (1) through the use of rules provided by a human (rule-based systems) or (2) with machine learning algorithms. Many newer AI systems use machine learning (see definition of machine learning below).

________________________________________________
Algorithm: Algorithms are the “brains” of an AI system and determine its decisions; in other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine Learning for more) or be rule-based, where human programmers provide the rules.
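To make that distinction concrete, here is a tiny illustrative sketch in Python. The scenario, numbers, and threshold logic are invented for illustration and are not from any particular AI system: the first function applies a rule a human wrote, while the second derives its own rule, a simple threshold, from example data.

```python
# Invented example contrasting a human-written rule with a learned rule.

def rule_based_flag(reading_minutes):
    # Rule provided by a human programmer: flag students who read < 20 minutes.
    return reading_minutes < 20

def learn_threshold(examples):
    # "Learn" a rule from labeled examples of (reading_minutes, needs_support).
    # Here the learned rule is just the midpoint between the two group averages.
    supported = [m for m, needs in examples if needs]
    not_supported = [m for m, needs in examples if not needs]
    return (sum(supported) / len(supported) + sum(not_supported) / len(not_supported)) / 2

examples = [(5, True), (12, True), (30, False), (45, False)]
threshold = learn_threshold(examples)          # the rule discovered from the data
print(rule_based_flag(15))                     # True: the human-written rule fires
print(18 < threshold)                          # the learned rule applied to a new student
```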

Artificial General Intelligence (AGI): Artificial general intelligence has not yet been realized; it would be an AI system that can learn, understand, and solve any problem that a human can.

Artificial Narrow Intelligence (ANI): Artificial narrow intelligence refers to AI systems that can solve only narrow problems. For example, a smartphone can use facial recognition to identify photos of an individual in the Photos app, but that same system cannot identify sounds.

Generative AI (GenAI): A type of machine learning that generates content, currently including text, images, music, and video, and can create 3D models from 2D input. ChatGPT (see definition below) is a specific example of GenAI.

Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see definitions for neural networks and Natural Language Processing below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).

Transformer models: Used in GenAI (the T stands for Transformer), transformer models are a type of language model. They are neural networks and are also classified as deep learning models. They give AI systems the ability to determine and focus on important parts of the input and output using something called a self-attention mechanism.

Self-attention mechanism: These mechanisms, also referred to simply as attention, help systems determine the important aspects of input in different ways. There are several types, and they were inspired by how humans can direct their attention to important features in the world, understand ambiguity, and encode information.
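For readers who want to see the core computation, here is a minimal numerical sketch of the scaled dot-product form of self-attention used in transformer models. All numbers are random and purely illustrative; real models learn these weights during training.

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention for a 3-token input.
np.random.seed(0)
x = np.random.rand(3, 4)                      # 3 tokens, each represented by 4 numbers

W_q, W_k, W_v = (np.random.rand(4, 4) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v           # queries, keys, values

scores = Q @ K.T / np.sqrt(K.shape[-1])       # how strongly each token relates to the others
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
output = weights @ V                          # each token becomes a weighted mix of all tokens

print(weights.round(2))                       # the "attention" each token pays to the others
```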

Large language models (LLMs): Large language models form the foundation for generative AI (GenAI) systems. GenAI systems include some chatbots and tools, including OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, an LLM detects statistical relationships in its training data, such as how likely a word is to appear following the previous words. As they answer questions or write text, LLMs use this model of word likelihood to predict the next word to generate. LLMs are a type of foundation model, which are pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include data sets of text without the creators’ consent.
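As a toy illustration of the next-word idea (not how a real LLM is built; actual models use neural networks trained on vast corpora), the sketch below counts which word most often follows another in a few made-up sentences and uses those counts to predict the next word.

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1        # count which word follows which

def predict_next(word):
    # Return the word that most often followed `word` in the training text.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- it followed "the" most often in this tiny corpus
```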

Computer Vision: Computer Vision is a set of computational challenges concerned with teaching computers how to understand visual information, including objects, pictures, scenes, and movement (including video). Computer Vision (often thought of as an AI problem) uses techniques like machine learning to achieve this goal.

Critical AI: Critical AI is an approach to examining AI from a perspective that focuses on reflective assessment and critique as a way of understanding and challenging existing and historical structures within AI. Read more about critical AI.

Data: Data are units of information about people or objects that can be used by AI technologies.

Training Data: This is the data used to train an algorithm or machine learning model. It has been generated by humans in their work or other contexts in the past. Although it sounds simple, training data matters because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring and you use data from existing companies, you will train that system to hire the kind of people who are already there. Algorithms take on the biases that are already in the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that trains them. (Note: we all have biases! Also, our data reflect the biases in the world.)1

Foundation Models: Foundation Models represent a large amount of data that can be used as a foundation for developing other models. For example, generative AI systems use large language foundation models. They can be a way to speed up the development of new systems, but there is controversy about using foundation models because, depending on where their data come from, there are different issues of trustworthiness and bias. Jitendra Malik, Professor of Computer Science at UC Berkeley, once said the following about foundation models: “These models are really castles in the air; they have no foundation whatsoever.”

Human-centered Perspective: A human-centered perspective sees AI systems working with humans and helping to augment human skills. People should always play a leading role in education, and AI systems should not replace teachers.

Intelligence Augmentation (IA): Augmenting means making something greater; in some cases, perhaps it means making it possible to do the same task with less effort. Maybe it means letting a human (perhaps a teacher) choose not to do all the redundant tasks in a classroom but automate some of them so they can do more of the things that only a human can do. It may mean other things. There is a fine line between augmenting and replacing, and technologies should be designed so that humans can choose what a system does and when it does it.

Intelligent Tutoring Systems (ITS): A computer system or digital learning environment that gives instant and custom feedback to students. An Intelligent Tutoring System may use rule-based AI (rules provided by a human) or use machine learning under the hood. By under the hood we mean the underlying algorithms and code that an ITS is built with. ITSs can support adaptive learning.

Adaptive Learning: Subject or course material is adjusted based on the performance of the learner. The difficulty of material, the pacing, sequence, type of help given, or other features can be adapted based on the learner’s prior responses.
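As a concrete illustration, here is a minimal sketch of the kind of adaptation such a system might make. The rule and numbers are invented for illustration, not taken from any real product; real adaptive systems use far richer models of the learner.

```python
# Invented example: adjust problem difficulty based on a learner's recent answers.
def next_difficulty(current_level, recent_correct):
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:            # doing well: offer harder problems
        return current_level + 1
    if accuracy < 0.5:             # struggling: step back and offer more help
        return max(1, current_level - 1)
    return current_level           # otherwise stay at the current level

print(next_difficulty(3, [True, True, True, False, True]))   # prints 4
```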

Interpretable Machine Learning (IML): Interpretable machine learning, sometimes also called interpretable AI, describes the creation of models that are inherently interpretable in that they provide their own explanations for their decisions. This approach is preferable to that of explainable machine learning (see definition below) for many reasons including the fact that we should understand what is happening from the beginning in our systems, rather than try to “explain” black boxes after the fact.

Black Boxes: We call things we don’t understand “black boxes” because what happens inside the box cannot be seen. Many machine learning algorithms are “black boxes,” meaning that we don’t have an understanding of how a system is using features of the data when making its decisions (generally, we do know what features are used, but not how they are used). There are currently two primary ways to pull back the curtain on the black boxes of AI algorithms: interpretable machine learning (see definition above) and explainable machine learning (see definition below).

Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.
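To see what “discovering its own rules” can look like in practice, here is a minimal sketch using the scikit-learn library (assuming it is installed). The data are invented; the point is that the decision rule printed at the end was found by the algorithm, not written by a person.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented data. Each row: [hours studied, practice problems completed];
# label: 1 = passed the quiz, 0 = did not pass.
X = [[1, 2], [2, 1], [1, 5], [6, 8], [7, 4], [8, 9]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=1).fit(X, y)

# The printed threshold was discovered by the algorithm from the data;
# no human programmer specified it.
print(export_text(model, feature_names=["hours", "problems"]))
print(model.predict([[5, 3]]))   # prediction for a new, unseen student
```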

It’s important to note that in machine learning, the algorithm is doing the work to improve and does not have the help of a human programmer. It is also important to note three more things. One, in most cases the algorithm is learning an association (when X occurs, it usually means Y) from training data that is from the past. Two, since the data is historical, it may contain biases and assumptions that we do not want to perpetuate. Three, there are many questions about involving humans in the loop with AI systems; when using ML to solve AI problems, a human may not be able to understand the rules the algorithm is creating and using to make decisions. This could be especially problematic if a human learner was harmed by a decision a machine made and there was no way to appeal the decision.

Illustration of the topology of a generic Artificial Neural Network. A first column of three stacked circles labeled input pointing to a second column of four stacked circles labeled hidden pointing to a final column of two circles labeled output. All circles are the same size and a pale yellow color with a thin black outline.
Illustration of the topology of a generic Artificial Neural Network. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Neural Networks (NN): Neural networks, also called artificial neural networks (ANNs), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in the human brain. In a neural network, after data enter the first layer, the data go through a hidden layer of nodes where calculations that adjust the strength of connections in the nodes are performed, and then go to an output layer.

Deep Learning: Deep learning models are a subset of neural networks. With multiple hidden layers, deep learning algorithms are potentially able to recognize more subtle and complex patterns. Like neural networks, deep learning algorithms involve interconnected nodes where weights are adjusted, but as mentioned earlier there are more layers and more calculations that can make adjustments to the output to determine each decision. The decisions by deep learning models are often very difficult to interpret as there are so many hidden layers doing different calculations that are not easily translatable into English rules (or another human-readable language).
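To make the layered structure concrete, here is a minimal sketch of data flowing forward through a small network with two hidden layers (a “deep” network in miniature). The weights are random placeholders; training would repeatedly adjust them so the outputs match known answers.

```python
import numpy as np

np.random.seed(1)
relu = lambda a: np.maximum(0, a)             # a common activation function

x = np.array([0.5, 0.1, 0.9])                 # input layer: 3 numbers describing one example
W1, W2, W3 = np.random.rand(3, 4), np.random.rand(4, 4), np.random.rand(4, 2)

hidden1 = relu(x @ W1)                        # first hidden layer
hidden2 = relu(hidden1 @ W2)                  # second hidden layer (more layers = "deeper")
output = hidden2 @ W3                         # output layer: e.g., scores for two possible answers

print(output.round(3))
```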

Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

NLP technologies help in many situations, including: scanning texts to turn them into editable text (optical character recognition), speech to text, voice-based computer help systems, grammatical correction (like auto-correct or Grammarly), summarizing texts, and others.

Robots: Robots are embodied mechanical machines that are capable of doing a physical task for humans. “Bots” are typically software agents that perform tasks in a software application (e.g., in an intelligent tutoring system they may offer help). Bots are sometimes called conversational agents. Both robots and bots can contain AI, including machine learning, but do not have to have it. AI can help robots and bots perform tasks in more adaptive and complex ways.

User Experience Design/User Interface Design (UX/UI): User-experience/user-interface design refers to the overall experience users have with a product. These approaches are not limited to AI work. Product designers implement UX/UI approaches to design and understand the experiences their users have with their technologies.

Explainable Machine Learning (XML) or Explainable AI (XAI): Researchers have developed a set of processes and methods that allow humans to better understand the results and outputs of machine learning algorithms. This helps developers of AI-mediated tools understand how the systems they design work and can help them ensure that they work correctly and are meeting requirements and regulatory standards.

It is important to note that the term “explainable” in the context of explainable machine learning or explainable AI refers to an after-the-fact explanation of a model’s decisions, not to a true understanding of how the model works. In theory, explainable ML/AI means that an ML/AI model will be “explained” after the algorithm makes its decision so that we can understand how the model arrived at it. This often entails using another algorithm to help explain what is happening inside the “black box.” One issue with XML and XAI is that we cannot know for certain whether the explanation we are getting is correct, and therefore we cannot technically trust either the explanation or the original model. Instead, researchers recommend the use of interpretable models.

Thank you to Michael Chang, Ph.D., a CIRCLS postdoctoral scholar, for reviewing this post and to Eric Nentrup for support with specific definitions. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

________________________________________________
1 Fusco, J. (2020). Book Review: You Look Like a Thing and I Love You. CIRCLEducators Blog. Retrieved from https://circleducators.org/review-you-look-like-a-thing/

Regulation of Learning: What is it, and why is it Important?

by Sarah Hampton and Dr. Dalila Dragnić-Cindrić

How many of us want our students to be highly motivated learners? Proactive? Goal-directed? Strategic? Perseverant? Adaptive? We’ve heard teachers across subjects and grade bands say that these are difference-making characteristics that students need to be successful in and out of the classroom. In educational research terms, students who demonstrate these qualities are skilled at regulating their learning. Researchers have dedicated significant efforts to understanding self-regulated learning skills and the underlying processes. In a series of three blog posts, we take a closer look at regulation of learning, why it matters to students and teachers, and how educators might foster it in classroom settings.

As Timothy Cleary describes in The Self-Regulated Learning Guide (2018), self-regulated learners “want to perform well on some activity” and “purposefully and strategically figure out ways to achieve their goals…despite experiencing challenges, barriers, or struggles, [they] continuously find a way to learn” (pp. 9-10). Interestingly, they do this by repeating three fairly simple phases (Figure 1):

  1. Before the Learning (Preparation Phase)–self-motivating and figuring out how to approach the task;
  2. During the Learning (Execution Phase)–maintaining motivation, using strategies to complete the task, and self-monitoring thinking and actions during learning;
  3. After the Learning (Reflection Phase)–determining how well the selected strategies helped accomplish the task and deciding how to improve next time.

Three phases of self-regulated learning cycle: preparation, execution, reflection.
Figure 1. Phases of self-regulated learning.

Note: This figure shows phases of self-regulated learning and steps students can take throughout this cyclical process. Adapted from the original figure The Cycle of Self-Regulated Learning by Karen Kirk from Develop Self-Regulated Learners: Choosing and Using the Best Strategies for the Task. Published under the Creative Commons license.

Self-regulated learning is cyclical, and its phases are iterative and loosely sequenced; students might move from one phase to the next or revisit previous phases as needed. Thinking about strategies involves thinking about learning strategies (e.g., ignoring distractions, re-reading task instructions) as well as the best content-area strategies to use for a given task.

But what do we do when our students aren’t particularly skilled in regulating their learning? Can regulation be learned? Can regulation be taught? Should it be taught?

Some learners figure out how to regulate their learning on their own and then go on to do it automatically without much thought. That can lead us to believe that some people just get it and some people don’t. However, that kind of fixed mindset thinking isn’t accurate. Regulation of learning can be learned and strengthened when people become aware of the principles and processes behind it and consciously reflect on how to do it better. In fact, when learners realize that the strategies they select are directly linked to how successful they are with tasks, they experience greater self-efficacy, motivation, and success on future tasks (Greene, 2018).

Likewise, regulation of learning can be taught when we explicitly talk about it with our students, model it for them, and prompt them to engage in it before, during, and after learning activities in our classes. For example, a mathematics teacher might ask her students to fill out a task planning sheet before starting a task (Figure 2) to help them prepare for learning.

An example of a student mathematics task planning sheet.
Figure 2. Student task planning sheet by D. Dragnić-Cindrić and S. Hampton

The purpose of engaging students in task planning is to get them to think about the task and their own goals for it, which might differ from the teacher’s goals. It connects the doing of the task to the time the teacher allotted for it and the materials students will need to use to get it done. Lastly, it leads the students to think about and plan the steps needed to complete the task beforehand. Of course, this plan is a starting point and should remain flexible as students work through the task. The idea behind scaffolds like this planning sheet is that they help students internalize and learn how to engage in self-regulated learning, and over time, begin to do it on their own in other classes.

So regulation can be learned and taught, but should it? In our conversations on this topic, we relied on our combined expertise, Sarah, as a practitioner and teacher coach with 15 years of experience, and Dalila, as a learning scientist, who studies individual and group regulation of learning. During our conversation, we discussed current regulation of learning literature, Dalila’s own research findings, and Sarah’s deep knowledge of classroom contexts. We concluded that teaching regulation is so important because, immediately, it helps students see what they do in the classroom as something they do for themselves vs. something they do for the teacher, parents, school, etc., and, ultimately, it prepares students for success in any career path. The bottom line is that regulation empowers students and prepares them for life.

Everyone is going to encounter a difficult moment, an exceptional challenge, and regulation is critical in that moment. Regulation of learning strategies are for everyone. Even if you don’t need them today, I promise you, a day will come when you’ll need them. – Dalila Dragnić-Cindrić

In addition to the benefits for students, an upfront investment in teaching regulation of learning returns dividends for teachers, too. Imagine having a classroom full of students who are active and confident self-regulated learners rather than passive recipients of knowledge. Some of the time you currently spend motivating learners and managing your classroom could be repurposed for more personalized instruction as students begin diagnosing their own learning barriers and requesting specific kinds of help. Teaching regulation of learning alleviates teachers of the sole responsibility for ensuring students’ progress, while equipping students to assume more ownership of their learning success.

Research supports what we intuitively know–helping students learn to be highly motivated, proactive, goal-directed, strategic, perseverant, adaptive learners is a game changer for them, and we can accomplish it by explicitly teaching and modeling regulation of learning skills. Because the benefits transcend subject areas and career paths, we would argue that teaching regulation is even more important than teaching subject specific content. Thankfully, teaching content and regulation of learning together is the best way to teach them both.

In the coming blog posts within this series, we’ll explore some barriers and potential solutions for teaching regulation of learning in our classrooms. We’ll also discuss regulation of learning in collaborative groups (i.e., social regulation of learning) and hear more from Dalila about her research on this topic and from Sarah about her experiences with managing collaborative groups in her math and science classrooms. Together we will offer insights and recommendations for educators.

Do you think it’s important to explicitly teach regulation of learning? Why or why not? If you’re already teaching it, let us know your favorite strategies by tweeting @EducatorCIRCLS!

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Resources

Cleary, T. J. (2018). The self-regulated learning guide: Teaching students to think in the language of strategies. Routledge. https://doi.org/10.4324/9781315693378

Greene, J. A. (2018). Self-regulation in education. Routledge. https://doi.org/10.4324/9781315537450

Kirk, K. (n.d.) The cycle of self-regulated learning. [Figure]. The Supporting and Advancing Geoscience Education at Two-Year Colleges (SAGE 2YC) project website. Retrieved July 13, 2023, from https://serc.carleton.edu/sage2yc/self_regulated/index.html

Artificial Intelligence and Education: What We’re Up To

notebook, lightbulb, atom, pencils inside a digital circuit board image
by Pati Ruiz

I was recently asked for an overview of the AI and Education landscape and how we are participating in it. In addition to promoting equity and accountability in AI, here is a summary of our recent writing and research including key ideas from our work. We believe that AI systems should support and augment, but never replace, a human. To ensure this, emerging technology systems and tools should be developed with the input of educators, learners, and families. As always, please share your thoughts with us @EducatorCIRCLS.

Writing and Presentations

AI and the Future of Teaching and Learning | A blog series we partnered on with the U.S. Department of Education’s Office of Educational Technology

Key Ideas:

  • Educational technology is evolving to include artificial intelligence.
  • Artificial intelligence will bring “human-like” features and agency into future technologies.
  • Policy will have an important role in guiding the uses of artificial intelligence in education to realize benefits while limiting risks.
  • Artificial intelligence will enable students and teachers to interact with technology in human-like ways.
  • Individuals will find it difficult to make choices that balance benefits and risks.
  • Creating policies can strengthen how people make decisions about artificial intelligence in education.
  • Educational applications of many types will be artificial intelligence-enabled, including teaching and learning, guiding and advising, and administration and resource planning applications.
  • Use of artificial intelligence systems in school technology is presently light, allowing time for policy to have an impact on safety, equity, and effectiveness.
  • Policies should encourage teacher engagement, including the development of teachers’ trust, and their confidence to recommend not using untrustworthy artificial intelligence systems.
  • Policies should incorporate experiences for educators to shape and support their own professional learning about how to utilize artificial intelligence systems in teaching and learning.
  • Including and informing educators in design and development decisions will result in more useful and usable teacher supports.

AI or Intelligence Augmentation for Education? | Communications of the ACM 

Key Ideas:

  • We recommend a focus on intelligence augmentation (IA) in education that would put educators’ professional judgment and learners’ voice at the center of innovative designs and features.
  • An IA system might save an educator administrative time (for example, in grading papers) and support their attention to their students’ struggles and needs.
  • An IA system might help educators notice when a student is participating less and suggest strategies for engagement, perhaps even based on what worked to engage the student in a related classroom situation.
  • We hope that IA for education will focus attention on how human and computational intelligence could come together for the benefit of learners.

Artificial Intelligence 101: Covering the Basics for Educators | Digital Promise Blog

Key Ideas:

  • AI lets machines make decisions and predictions.
  • Teachers are essential to education, and AI should be used to better support them.
  • Technology often comes with ethical implications, and AI is no different; educators should ask questions and investigate AI tools and systems before they adopt them into a classroom.

Teachers Partnering with Artificial Intelligence: Augmentation and Automation | Educator CIRCLS Blog 

Key Ideas:

  • Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue.
  • AI systems should support or augment, but never replace, a teacher.
  • These systems should be developed with the input of teachers, students, and families.

Artificial Intelligence and Adaptivity to Strengthen Equity in Student Learning | Getting Smart

Key Ideas:

  • Educators, researchers, and developers prioritize adaptivity when it comes to emerging learning technologies.
  • Incorporating AI tools requires specific and precise inputs to generate useful outputs.
  • When practitioners, learners, researchers, and developers work together with shared values, more equitable learning is possible.

Ethical AI | EngageAI Nexus Blog 

Key Ideas:

  • Ethical considerations should be front and center throughout the development of any new AI innovation, and ethics should be central to our definition of success for AI.
  • Policies and guidelines from the government, accreditation requirements in education, and standards of professional ethics are all needed to reinforce ethics in AI.
  • Public education is also important so that end-users can make informed decisions based on a full understanding of key issues such as transparency and privacy.

Definitions | Glossary of Artificial Intelligence Terms for Educators: A glossary written for educators to reference when learning about and using artificial intelligence (AI).

Presentation | Insights on Artificial Intelligence and the Future of Teaching and Learning at the 2023 Consortium for School Network (CoSN) Conference.

Listening Sessions | AI and the Future of Learning: Listening Sessions | We supported the U.S. Department of Education’s Office of Educational Technology listening sessions about Artificial Intelligence (AI). We connected with teachers, educational leaders, students, parents, technologists, researchers, and policymakers to gather input, ideas, and engage in conversations that will help the Department shape a vision for AI policy that is inclusive of emerging research and practices while also informed by the opportunities and risks.

Ongoing Research

Emerging technology adoption framework: For PK-12 education | Educator CIRCLS Emerging Technology Advisory Board

Key Ideas:

  • A framework we co-developed with education community members to help ensure that educational leaders, technology specialists, teachers, students, and families are all part of the evaluation and adoption process for placing emerging technologies (including artificial intelligence and machine learning) in PK-12 classrooms.
  • We are currently working with League member Willy Haug, Director of Technology and Innovation to modify this framework for adoption at Menlo Park City School District.

Study | ChatGPT/GPT-4 for Developing Sample Computational Thinking Lesson Plans at North Salem School District

  • I am working with Dr. Julio Vazquez, Director of Instruction and Human Resources at North Salem Central School District, who is working with his team to develop sample computational thinking lessons across all subject areas K-12 using ChatGPT. These lessons are not meant to be implemented in the classroom “as is”; rather, these sample lessons are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process.

Artificial Intelligence Practitioner Advisory Board | A group that will explore the use of emerging technologies in classrooms and how we might leverage technologies to better support educators and their students. We hope to foster a sense of community within the group where researchers and developers can learn along with you as we all go through the process of reviewing technologies and making recommendations on their use. This Practitioner Advisory Board is supported by two NSF projects.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Developing Sample Computational Thinking Lessons with ChatGPT

by Pati Ruiz, Merijke Coenraad, and Judi Fusco with contributions from Julio Vazquez

What is ChatGPT?

Let’s start with some definitions. ChatGPT is commonly classified as a natural language processing model, meaning it deals with language and human speech patterns, and as “generative artificial intelligence,” meaning that it is AI that creates new content — in this case, new text.

More specifically, ChatGPT is a chat-based generative pre-trained transformer, meaning that the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer). Basically, it’s a chatbot that allows a user to ask a question in plain language and get a response in a way similar to how a human would reply.

What does this mean for education?

“ChatGPT is a good prompt for conversation.
I see this tool as a starting point for teachers and students.”
-Julio Vazquez, North Salem Central School District

Despite the precedent of banning access to ChatGPT set by New York City Public Schools in January 2023, not all school districts are following suit. Some educators believe that these AI systems and tools are out in the world, and the best thing educators can do is teach students to partner with AI tools so they can be better prepared for a technological world. For example, English teacher Cherie Shields was recently interviewed by the New York Times, where she shared that she assigned students in one of her classes to use ChatGPT to create outlines for a recent essay assignment. She shared that the process helped deepen students’ understanding of the stories while also teaching them to interact with an AI system by manipulating their inputs to get the responses they were looking for. In this case, ChatGPT became a tool that can support learning when we thoughtfully include it in our lessons and also guide students in using it well.

Dr. Julio Vazquez, Director of Instruction and Human Resources, and his team are encouraging experimentation and access to ChatGPT for all faculty and staff and are thinking about how to provide students with access in a manner that will not conflict with student privacy laws. Staff members are rolling their sleeves up and starting to explore and learn about how they can use it with their students productively. In fact, they are exploring the use of ChatGPT to develop sample Computational Thinking (CT) lesson plans that the team uses as a jumping off point in their CT Pathways development process.

ChatGPT for Developing Sample Computational Thinking Lesson Plans

compass pointing north
North Salem Central School District
In a recent conversation with Dr. Vazquez, we asked him more about how he and his teachers are incorporating ChatGPT in their computational thinking lesson planning process.

Dr. Vazquez and his colleague Cynthia Sandler, Library Media Specialist, started by entering prompts into ChatGPT and seeing what came out. The searches started with prompt terms that went something like “generate a 5th grade lesson for computational thinking focusing on science.”

As the team began to analyze the lesson plans that came out, they realized they needed to make adjustments. Julio shared that he and his team have become better at giving ChatGPT enough context so that the lessons that are developed are closer to what the team expects of a lesson plan and the content better aligns to both CT and content area standards. For example, a more recent prompt included:

“write a science lesson that integrates 9-12.CT.1 Create a simple digital model that makes predictions of outcomes. and HS-PS1-5. Apply scientific principles and evidence to explain how the rate of a physical or chemical change is affected when conditions are varied.”

The prompt terms and outputs were documented and provided a good starting point for sparking conversation. On first pass, the team collectively agreed that they liked the structure of the generated lesson plans. Beyond format, and in addition to computational thinking and subject-area standards, the prompts entered into ChatGPT also included Habits of Mind, thinking dispositions that are implemented in North Salem, as well as the use of Visible Thinking Routines.
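For readers who would rather script this kind of request than paste it into the ChatGPT website, here is a minimal sketch using OpenAI’s Python client. This is not the district’s workflow (they used the web interface), and the model name and parameters are placeholders.

```python
from openai import OpenAI

# Assumes an OPENAI_API_KEY environment variable is set.
client = OpenAI()

prompt = (
    "Write a science lesson that integrates 9-12.CT.1 (create a simple digital "
    "model that makes predictions of outcomes) and HS-PS1-5 (apply scientific "
    "principles and evidence to explain how the rate of a physical or chemical "
    "change is affected when conditions are varied)."
)

response = client.chat.completions.create(
    model="gpt-4",   # placeholder; use whichever model your account provides
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)   # the generated draft lesson plan
```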

Dr. Vazquez and his team have worked with ChatGPT to develop sample computational thinking lessons across all subject areas K-12. These lessons are not meant to be implemented in the classroom “as is”; rather, these sample lessons are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process. We look forward to sharing more! Until then, do you have questions for us? Are you integrating ChatGPT in your classroom, school, or district? Let us know @EducatorCIRCLS.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Teachers Partnering with Artificial Intelligence: Augmentation and Automation

3x2 grid of colorful AI and learning related icons including students speaking in different languages, and an ipad with an award stamp

By Pati Ruiz and Judi Fusco

Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue. Our starting point is that AI systems should support or augment, but never replace, a teacher. In order to ensure this, these systems should be developed with the input of teachers, students, and families.

So, what types of AI systems do teachers want to see developed? A group of teachers from the Merlyn Mind Practitioner Advisory Board shared ideas for how AI might help teachers better support their students. One scenario emerged around students who have Individualized Education Programs or Plans (IEPs)1. In this post we will describe how an AI system might support teachers and students by automating:

  1. Planning and Supporting Preferences
  2. Monitoring
  3. Documentation

Planning and Supporting Preferences

First, a teacher could input student instructional plans into the system. Then, the system can review the plans, make recommendations, and send alerts to the teacher when something may not work for a student. In the alert, the system could provide suggestions of adaptations on lessons or assignments based on the needs of each student. For example, an AI system can scan what’s coming up in an instructional unit and alert the teacher that the website they selected does not meet the accessibility standards required by the students in the classroom. A more advanced system could also suggest an alternative option, or even better, search for multiple resources that are appropriate and let the teacher decide what resources are best suited for their students’ instructional needs. In all cases, the AI system is only helping and making suggestions that the teacher may act on.

An AI system can also allow for a broader range of inputs from students for assignments based on their needs. For example, if a student accommodation includes submitting assignments as recorded audio, but the teacher prefers written assignments, an AI system can convert the student’s audio to text so the teacher can review or grade the text. The speech-to-text tool should also allow the teacher to hear the student’s voice for a particular sentence or phrase, for example, if the transcription was not successful. Alternatively, if a student needs to hear the teacher’s comments on their assignments instead of reading them, the AI system can convert the written comments into speech for the student to hear. To additionally help the teacher, the system might suggest comments that they had written for another student so the teacher can reuse or repurpose them. The system might also remind the teacher of a student’s preference for feedback, and if the student prefers verbal feedback, the teacher could read and record the comments for that more personal touch.
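As a small illustration of the speech-to-text step described above, here is a minimal sketch using the open-source SpeechRecognition package (our assumption; any transcription service would work similarly). The file name is hypothetical.

```python
import speech_recognition as sr

# Hypothetical audio file containing a student's spoken response.
recognizer = sr.Recognizer()
with sr.AudioFile("student_response.wav") as source:
    audio = recognizer.record(source)          # read the whole recording

text = recognizer.recognize_google(audio)      # send the audio to a transcription service
print(text)                                    # the student's spoken answer as reviewable text
```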

Monitoring

To support teachers in providing adequate accommodations for their students, an AI system can monitor student IEP information and make automated recommendations for needed support. For example, the system could identify students who require extended time and either share a list with the teacher or make appropriate adjustments to due dates for individual students in a learning management system. Here, we point out the need for AI systems to be able to interact with other systems or be embedded within them. Additionally, the system must do this in a way that does not expose sensitive information about students to the whole class.

Related to the text-to-speech and speech-to-text ideas discussed above, an AI system can also provide individualized read-aloud capabilities for students who need that support. The system could also remind the teacher to provide tools, like headphones or closed captioning for students who need to listen to content. We firmly believe that AI systems can help by doing things that machines are good at, while continuing to enable teachers to focus on what humans do best—like developing interpersonal relationships and identifying nuanced needs. With these types of automated supports, it is important to ensure that teachers have the ability to make the final decisions about students’ needs and that students have the agency to accept and decline supports as they go.

Documentation

Supporting a classroom with students who have varying needs—whether they are documented in an IEP or not—requires a significant amount of monitoring and reporting on the part of educators. An AI system could support teachers by not only monitoring the individual requirements of students, but also documenting the adjustments and accommodations that were made for each student. This documentation could then be shared with the students’ families to provide a summary of the work that students have accomplished and how they have been supported in completing that work. Of course, a teacher would review and verify that the summary produced by the AI system is accurate and flag any issues with the write-ups that would need to be addressed by the AI design team.

By the end of the instructional unit, teachers would be able to review reports of student progress, identify what worked and what didn’t, and ensure that all students are making meaningful progress. Automating planning, tracking, and documentation can give a teacher more time to care for students; however, given the various risks AI systems bring, it is crucial that teachers also have the capability to override an AI system when needed.

Risks

The imagined AI system described helps teachers do what they do best by supporting them to ensure their students receive the accommodations they require and then documents those accommodations. Using such systems will come with risks, and AI systems that engage with student IEP data need to have the highest level of data privacy and oversight. As we discussed earlier, educators must be involved—for example, the teacher is in charge of giving feedback, but the system may make suggestions that help the teacher give better feedback. If educator experts are not in the loop, there could be harmful consequences for students. Educators must be diligent and not assume that every accommodation determined by an AI system is correct or the best decision. AI systems lack full context and the ability to make human decisions. Educators must have oversight and be able to verify and approve every decision made by the system.

Educator Voices

This blog post presents an imagined AI system based on conversations with a group of practitioners from the Merlyn Mind Practitioner Advisory Board. We need more teachers and educators involved in these conversations, so please consider this blog post as an invitation to you to connect with us and join the conversation on the future of AI in Education. In addition to Merlyn Mind, if you are interested in getting involved, please visit the links below.

1 An IEP is a legal document in the United States that is developed for all public school children who need special education. It is created by district personnel with input from the child’s guardians and is reviewed every year. For more information see https://www2.ed.gov/about/offices/list/ocr/docs/edlite-FAPE504.html

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Enhancing Learning Performance With Microlearning

iPads used by students in school classrooms
image by Arthur Lambillotte via Unsplash
by Courtney Teague, Rita Fennelly-Atkinson, and Jillian Doggett

Courtney Teague, EdD, Deputy Director of Internal Professional Learning and Coaching with the Verizon Innovative Learning Schools program, based in Atlanta, GA.
Rita Fennelly-Atkinson, EdD, Director of Micro-credentials with the Pathways and Credentials team, based in Austin, TX.
Jillian Doggett, MEd, Project Director of Community Networks with the Verizon Innovative Learning Schools program, based in Columbus, OH.


What is microlearning?
Microlearning is a teaching and learning approach that delivers educational content in short, focused bursts of information. Microlearning tends to focus on one objective, and the learning doesn’t require more than 1-20 minutes of the learner’s time. Schools and teachers can use microlearning to supplement traditional instruction or as a standalone learning tool. Microlearning has been around for a long time–remember those flashcards in kindergarten that helped us learn numbers, the alphabet, and colors? However, schools have largely been unaware of how powerful this learning strategy can be for a teacher.

Microlearning has several potential benefits for both learners and teachers. For learners, microlearning can provide a more engaging and interactive learning experience. This type of instruction can also help reduce distractions for students who become disengaged by extraneous information. For teachers, microlearning can be used to differentiate instruction and address the needs of all learners. Additionally, microlearning can save instructional time by allowing teachers to deliver targeted information in a concise format. Teachers can tailor microlearning content to focus on specific skills or knowledge gaps (Teague, 2021).

Microlearning is flexible and can be accessed anytime, anywhere. Learners can complete microlearning activities on their own time, at their own pace. Microlearning is like a seasoning for learning; it seasons and heats up information to make the process of comprehending new knowledge easier. It has been part-and-parcel in many schools’ instructional strategies since time immemorial, but only recently have we begun paying attention to how powerful this strategy really can be when used correctly.

microchip being held
image by Brian Kostiuk via Unsplash
What does microlearning look like?
Microlearning can come in many forms. Below is a list of 10 microlearning examples:

  1. Short, Focused Videos
  2. Infographics
  3. Podcasts or Audio Recordings
  4. Social Media Posts and Feeds
  5. Interactive Multimedia
  6. Animations
  7. Flashcards
  8. Virtual Simulations
  9. Assessment Activities: Polls, Multiple-Choice Questions, Open Response Questions
  10. Games

How can teachers use microlearning effectively to maximize content retention, personalize learning experiences, and bolster student engagement?

Use microlearning to Activate Student Prior Knowledge and Generate Excitement for New Learning

Assign microlearning, such as a self-paced learning game, to assess and activate prior knowledge around a topic. Or place a few bite-sized learning opportunities about an upcoming lesson in your Learning Management System (LMS) for learners to preview beforehand to generate interest and excitement for new learning.

Use microlearning to Personalize Learning Experiences

Creating microlearning in various formats covering multiple topics gives learners the agency to make meaningful choices about their learning paths. For example, to learn a new concept or build new skills, learners can choose to engage with an interactive image, listen to a short audio guide, participate in a learning game, or watch an explainer video or animation. Additionally, learners who need remediation or want to extend their learning can quickly access content to review a topic again or complete additional microlearning lessons.

Use microlearning to Encourage Communication and Collaboration

Create different microlearning bites, each covering a specific objective or portion of a learning goal. Assign each student to engage with one microlearning bite and then use the Jigsaw method to have learners learn about a new topic in a cooperative style. Similarly, you can assign microlearning that includes thought-provoking, probing questions and have learners discuss on a discussion forum or by recording and responding to each other’s short video or audio responses.

Use microlearning to Engage Families and Caretakers

Distribute microlearning to learners’ families and caretakers so they can quickly learn the content being taught in class and take an active role in their child’s learning at home.

Use microlearning to Reduce Time Spent Grading

Create microgames and assessments using tools that automatically grade and provide learner analytics to reduce the time spent grading. For example, create an interactive video with embedded questions, a short quiz on your LMS, or a learning game that automatically grades learners’ responses and provides you with learner analytics you can use right away to inform just-in-time teaching.

Use microlearning to Build Classroom Community

Have learners create microlearning lessons to teach each other about themselves, topics that interest them, or around specific learning objectives that they have mastered. Use these bite-sized pieces of learning to expand your microlearning repository, give learners ownership of their learning, and foster a sense of classroom community.

Use microlearning to Promote Learning Outside of School

Over time, create and curate a repository of microlearning assets, such as explainer videos, audio recordings, infographics, learning games, trivia quizzes, flashcards, etc., on your Learning Management System (LMS). Then, learners can easily access and continue their learning outside of school, cultivating a life-long learning mindset.

How to assess microlearning?
The flexibility of microlearning allows for an abundance of possibilities in how it is assessed. For example, if your goal is simply to educate people about a new process using a video, then you don’t have to assess; you can simply measure reach by the number of views and effectiveness by the level of adherence to the new process by a specific date. If your goal is to educate people about available services, then your performance indicator might be the use of those services. In other words, you have a license to be creative and to assess learning effectiveness in many different ways.

More formally, the evaluation of learning can be categorized into two types: assessments and indicators (Fennelly-Atkinson & Dyer, 2021). Assessments include most formal and informal methods of evaluating learning, such as surveys, check-ins (e.g., verbal, data, progress), completion rates, knowledge checks, skill demonstration observations, self-evaluations, and performance evaluations. Meanwhile, indicators include indirect measures such as performance, productivity, and success benchmarks. Which type you use is largely dependent on the learning context and need. The key questions to consider are the following:

  • What measurable change is the microlearning impacting?
  • Do you need individual, organizational, or both types of data?
  • What is the ease of collecting and analyzing the data?
  • Can existing evaluations or indicators be used to measure the impact of learning?

What are the drawbacks of microlearning & how to mitigate them?
Microlearning does have some potential drawbacks. For one thing, it can be easy for learners to become overwhelmed by the sheer volume of micro-lessons that they are expected to complete. Additionally, microlearning can sometimes result in a fragmented understanding of a topic, as learners are only exposed to small pieces of information at a time. Microlearning often does not provide an opportunity for learners to practice and apply what they have learned. However, these potential drawbacks can be avoided or mitigated when microlearning is designed into learning activities.

Another potential drawback of microlearning is that it can be difficult to maintain a consistent level of quality control. With so much content being produced by so many different people, it can be hard to ensure that all of the material is accurate and up to date. This problem can be mitigated by careful selection of materials and regular quality checks.

Because of this, microlearning can create a significant amount of work for teachers. In order to properly incorporate microlearning into their classrooms, teachers need to have a good understanding of the material and be able to effectively facilitate discussion and debate. While it may require some additional effort on the part of teachers to do microlearning, it feels worth it as it has the potential to significantly improve student engagement and learning outcomes.

Which tools can you use to create microlearning?
While microlearning does not necessarily require the use of digital tools, the reach and potential of these types of learning experiences are magnified by technology. Because microlearning is so short and usually discrete, there are many types of tools and methods of delivery that can be used. Formal authoring tools such as LMSs and Articulate can be used, but are not required. Any type of tool that can create a static or dynamic piece of content can be used. Further, any type of delivery system can be used to disseminate the learning. Making microlearning relevant and specific to the learning context, environment, and audience is key to selecting a content creation tool and delivery system (Fennelly-Atkinson & Dyer, 2021).

Summary
To wrap it up, microlearning is breaking down and chunking learning into bite-sized pieces. Microlearning might be small but can have a big impact on powerful teaching and learning. It can take many different forms, which means that there are just as many content-creation tools and delivery platforms. Likewise, there are a variety of ways to assess microlearning depending on the goal and purpose for its use. There is no one correct way of creating microlearning. Microlearning can be as simple as listening to the pronunciation of words on an audible dictionary online application. Teachers can use this flexible method of microlearning to support research-based instructional practices and personalize learning experiences.

So how might you use this approach to meet the modern learner’s needs? Tweet @EducatorCIRCLS and be part of the conversation.

References

Fennelly-Atkinson, R., & Dyer, R. (2021). Assessing the Learning in microlearning. In Microlearning in the Digital Age (pp. 95-107). Routledge.

Teague, C. (2021, January 11). It’s All About microlearning. https://community.simplek12.com/webinar/5673

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Apprentice Learner: Artificial Intelligence (AI) in the Classroom

by Sarah Hampton

One of my favorite things about CIRCLS is the opportunity to collaborate with education researchers and technology developers. Our goal as a community is to innovate education using technology and the learning sciences to give more learners engaging educational experiences to help them gain deep understanding. To reach that goal, we need expertise from many areas: researchers who study how we learn best, teachers who understand how new technologies can be integrated, and developers who turn ideas into hardware or software.

Recently I was reminded of one such opportunity: Judi, Pati, and I met with Daniel Weitekamp in June of 2020. Daniel, a PhD student at Carnegie Mellon University at the time, was developing an AI tool for teachers called Apprentice Learner.

A stacked addition problem demonstrating carrying. "839 + 445 = 1284" Space below the problem displays "hint," "done," "previous," and "next" buttons.
Figure 1. The Apprentice Learner interface that students use. The user can request a hint or type in their answer and then hit done.

Apprentice Learner looks a bit like a calculator at first glance, so an onlooker might be tempted to say, “What’s so cutting edge about this? We’ve been able to do basic math on calculators for years.” But we need to understand the difference between traditional software and software using artificial intelligence (AI) to appreciate the new benefits this kind of tool can bring to the education table.

In a basic calculator, there’s an unchanging program that tells the screen to display “1284” when you type in “839+445.” There’s no explanation given for how and why the programming behind a calculator works. Yet, for each math problem someone could type in a calculator, there is an answer that has been explicitly programmed to be displayed on the screen.

Contrast a calculator with Apprentice Learner, which uses machine learning (a type of artificial intelligence). No one tells Apprentice Learner to display "1284" when it sees "839+445." Instead, it has some basic explicit instructions and is given lots of examples of correctly solved problems that add two or more columns of numbers. Then it has to figure out how to answer new questions. The examples it is given are called training data. In this case, Apprentice Learner was given explicit instructions about adding single-digit numbers and then lots of training data, multidigit addition problems with their answers, maybe problems like "21+43=64," "49+8=57," and "234+1767=2001." Then it starts guessing at ways to arrive at the answers given in the training data.

The first guess might be to stack the numbers and add each column from left to right. That works perfectly for “21+43,” but gives an incorrect answer of “129” for “49+8.”

Two guesses. First: "21" stacked correctly over "43" equals 64, but the next stacked equation gives an incorrect answer of "129" for "49+8" because the computer calculates 4+8 in the first column and brings the 9 in the second column down. Second: "21" stacked correctly over "43" again equals 64, but the next stacked equation gives an incorrect answer of "417" for "49+8" because the computer calculates 4 for column 1 and 17 for column 2 and puts them together.

The second guess might be to stack the numbers and add each column from right to left. Again, that works perfectly for “21+43.” Unfortunately, that would give an answer of “417” for “49+8.”

The software continues finding patterns and trying out models until it finds one that fits the training data best. You can see below that, eventually, Apprentice Learner “figured out” how to regroup (aka carry) so it could arrive at the correct answer.

A stacked addition problem demonstrating carrying. "839 + 445 = 1284" Space below the problem displays "hint," "done," "previous," and "next" buttons.
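Apprentice Learner's actual mechanism is more sophisticated than this, but a minimal sketch can make the "guess a strategy, check it against the training data" idea concrete. Everything below is illustrative: the strategy functions and the tiny data set are mine, not code from the real system.

```python
# A toy sketch of "guess and check against training data" for column addition.
# Not Apprentice Learner's real code; for illustration only.

def guess_left_aligned(a, b):
    """Guess 1: stack the numbers left-aligned and write each column's sum side by side."""
    sa, sb = str(a), str(b)
    columns = []
    for i in range(max(len(sa), len(sb))):
        da = int(sa[i]) if i < len(sa) else 0
        db = int(sb[i]) if i < len(sb) else 0
        columns.append(str(da + db))
    return int("".join(columns))

def guess_right_aligned_no_carry(a, b):
    """Guess 2: stack right-aligned and add each column, but never carry."""
    sa, sb = str(a)[::-1], str(b)[::-1]
    columns = []
    for i in range(max(len(sa), len(sb))):
        da = int(sa[i]) if i < len(sa) else 0
        db = int(sb[i]) if i < len(sb) else 0
        columns.append(str(da + db))
    return int("".join(reversed(columns)))

def guess_right_aligned_with_carry(a, b):
    """Guess 3: stack right-aligned, add right to left, and carry into the next column."""
    sa, sb = str(a)[::-1], str(b)[::-1]
    carry, digits = 0, []
    for i in range(max(len(sa), len(sb))):
        da = int(sa[i]) if i < len(sa) else 0
        db = int(sb[i]) if i < len(sb) else 0
        total = da + db + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))

training_data = [(21, 43, 64), (49, 8, 57), (234, 1767, 2001), (839, 445, 1284)]

strategies = [("left-aligned, no carry", guess_left_aligned),
              ("right-aligned, no carry", guess_right_aligned_no_carry),
              ("right-aligned, with carry", guess_right_aligned_with_carry)]

for name, strategy in strategies:
    hits = sum(strategy(a, b) == answer for a, b, answer in training_data)
    print(f"{name}: fits {hits}/{len(training_data)} training examples")
# Only the "with carry" strategy fits all of the training data; the other two
# fit "21+43" but fail on problems like "49+8".
```

Running this sketch, the first two guesses match only the easiest example, while the carrying strategy matches every training example, which is the sense in which the software "finds the model that fits the training data best."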

So what are the implications for something like this in education? Here are a few of my thoughts:

Apprentice Learner models inductive learning which can help pre-service teachers.

Induction is the process of establishing a general law by observing multiple specific examples. It's the basic principle machine learning uses. In addition, inductive reasoning tasks such as identifying similarities and differences, recognizing patterns, generalizing, and generating hypotheses play important roles when learning mathematics (see Haverty, Koedinger, Klahr, and Alibali). Multiple studies have shown that greater learning occurs when students induce mathematical principles themselves first rather than having the principles directly explained at the outset (see Zhu and Simon, Klauer, and Koedinger and Anderson).

However, instructional strategies that ask students to reason inductively prior to direct instruction can be difficult for math teachers to implement if they haven't experienced learning math this way themselves. Based on conversations with multiple math teacher colleagues throughout the years, most of us learned math in a more direct manner, i.e., the teacher shows and explains the procedure first and then the learner imitates it with practice problems. (Note: even this language communicates that there is "one right way" to do math, unlike induction, in which all procedures are evaluated for usefulness. This could be a post in its own right.)

Apprentice Learner could provide a low-stakes experience to encourage early-career teachers to think through math solutions inductively. Helping teachers recognize and honor multiple student pathways to a solution empowers students, helps foster critical thinking, and increases long-term retention. (See Atta, Ayaz, and Nawaz and Pokharel.) This could also help teachers preempt student misconceptions (like column misalignment caused by a misunderstanding of place values and digits) and be ready with counterexamples to show why those misconceptions won't work for every instance, much like I demonstrated above with Apprentice Learner's possible first and second guesses at how multi-digit addition works. Ken Koedinger, professor of human-computer interaction and psychology at CMU, put it like this: "The machine learning system often stumbles in the same places that students do. As you're teaching the computer, we can imagine a teacher may get new insights about what's hard to learn because the machine has trouble learning it."

The right training data is crucial.
What would have happened if the training data had included only some types of problems? What if all of the problems used two-digit numbers? Then it wouldn't have mattered whether you stacked them left to right or right to left. What if none of the problems required regrouping/carrying? Then adding each column from right to left without ever carrying would fit every training example, and the model would never learn to regroup. But when all the edge cases are included, the model is more accurate and robust.
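To make that concrete, here is a small, hypothetical continuation of the earlier sketch (reusing the strategy functions defined there): when no training example requires carrying, the "no carry" guess fits the data just as well as the correct strategy, so the data cannot tell them apart.

```python
# Illustrative only: reuses guess_right_aligned_no_carry and
# guess_right_aligned_with_carry from the earlier sketch.
no_carry_data = [(21, 43, 64), (12, 34, 46), (50, 27, 77), (123, 321, 444)]

for name, strategy in [("right-aligned, no carry", guess_right_aligned_no_carry),
                       ("right-aligned, with carry", guess_right_aligned_with_carry)]:
    hits = sum(strategy(a, b) == answer for a, b, answer in no_carry_data)
    print(f"{name}: fits {hits}/{len(no_carry_data)} examples")
# Both strategies fit 4/4 of these examples, so incomplete training data
# cannot distinguish the wrong strategy from the right one.
```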

Making sure the training data is large enough and varied enough to cover all the edge cases is crucial to the success of any AI model. Consider what has already happened when insufficient training data was used for facial recognition software: "A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18-30 years old." Some of the most historically excluded people were most at risk for negative consequences when the AI failed. What's important for us as educators? We need to ask questions about things like training data before using AI tools and do our best to protect all students from negative consequences of software.

Feedback is incredibly advantageous.

A flowchart demonstrating different ways a user can give direct input to Apprentice Learner such as demonstrating the next step, specifying if the highlighted input is correct, etc.
Figure 2. Diagram of how a user gives feedback to the Apprentice Learner system.

One of the most interesting things about Apprentice Learner is how it incorporates human feedback while it develops models. Instead of letting the AI run its course after the initial programming, it’s designed for human interaction throughout the process. The developers’ novel approach allows Apprentice Learner to be up and running in about a fourth of the time compared to similar systems. That’s a significant difference! (You can read about their approach in the Association for Computing Machinery’s Digital Library.)

It's no surprise that feedback helps the system learn; in fact, there's a parallel between helping the software learn and helping students learn. Feedback is one of the most effective instructional strategies in our teacher toolkit. As I highlighted in a former post, feedback had an average effect size of 0.79 standard deviations, a larger effect on students' performance than prior cognitive ability, socioeconomic background, or reduced class size. I've seen firsthand how quickly students can learn when they're given clear, individualized feedback exactly when they need it. I wasn't surprised to see that human intervention could do the same for the software.

I really enjoyed our conversation with Daniel. It was interesting to hear our different perspectives around the same tool. (Judi is a research scientist, Pati is a former teacher and current research scientist, Daniel is a developer, and I am a classroom teacher.) I could see how this type of collaboration during the research and development of tools could amplify their impacts in classrooms. We always want to hear from more classroom teachers! Tweet @EducatorCIRCLS and be part of the conversation.

Thank you for your time in talking and reviewing this post, Daniel Weitekamp, PhD Candidate, Carnegie Mellon University.

Learn More about Apprentice Learner:

Learn More about Math Teaching and Learning:

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

How Can AI Systems Support Teachers: 5 Big Ideas from the Learning Sciences

This post was originally published on the Digital Promise website.

By Pati Ruiz and Judi Fusco

The learning sciences study the design and implementation of effective learning environments by drawing on a variety of perspectives across a range of physical, social, and technological spaces1. The learning sciences focus on human learning and on helping individuals achieve their fullest potential and attain 21st-century skills. Because of this focus, the learning sciences should be foundational in the design and development of emerging technologies for teaching and learning. AI systems are an emerging technology that is starting to play a significant role in the redesign of learning environments. To increase the chances of creating successful AI systems for learning, those systems should be grounded in the learning sciences. We'll discuss five big ideas from the learning sciences in relation to the design of AI systems: Representation and Supports; Collaboration; How Students Think; Building on Students' Cultural and Linguistic Assets; and Assessment and Feedback. We propose these big ideas as a starting point in the design of better AI systems.

Big Idea 1: Representation and Supports

The learning sciences have found that enabling students to make connections across multiple representations (for example, graphs, writing, images, maps, blocks, etc.) contributes to knowledge construction. Different forms of representation give students a way to make sense of concepts in whatever way best helps them construct their knowledge. How can this big idea be used in designing an AI system?

In a classroom where a teacher is fielding questions from students about a specific species of frog, an AI system can support the teacher by quickly searching for and projecting a set of visual representations of the frog that are appropriate for the students and have high-quality information for the teacher and students. When teaching about a metabolic function, an animation might help, and the AI system could share the animation and also point to text or other representations that may help students make connections to understand the process. By giving students and teachers just-in-time support like a relevant animation or engaging follow-up questions, AI systems can support teachers to orchestrate learning experiences by automating tasks (as described above) so teachers can spend more time focused on students. Beyond those types of just-in-time supports, AI systems can further support the engagement of all students in sustained creative work—something that has been a challenging problem in the design of learning environments.

Big Idea 2: Collaboration

The learning sciences have found that learning environments should be designed to foster collaboration and help learners work together to share and make sense of challenging problems. Research points us toward more social and collaborative learning environments.

AI systems could support this big idea by making recommendations for how teachers group students or by giving students themselves prompts that may lead to shared understanding when working in groups without the teacher. Emerging AI technologies might help teachers ask different groups the right guiding questions as the AI system “listens” to what each group is discussing. An AI system that asks questions might also be able to assess the answers to those questions, help students arrive at the same conceptual understanding, and determine when the group is ready for a new task.

Big Idea 3: How Students Think

The learning sciences have found that learning environments should not only be collaborative but also foster adaptive or personalized learning, because there is not a single way to learn and individuals have unique needs when it comes to learning environment designs.

AI systems might support teachers in facilitating this big idea by finding instances of student reasoning for teachers to review based on the analysis of video, audio, or student work. AI systems can also quickly provide insights to teachers about what learning path a student is taking, and analytics could help teachers understand how each of their students tends to learn a concept based on their writing, speaking, or movements. A teacher might take the feedback given by an AI system, follow up with students about their individual learning processes, and make decisions with them about what to do next. By helping students keep track of how they are practicing, providing scaffolds when they are needed, and removing those scaffolds when a student is ready, an AI system can support students' unique learning needs.

Big Idea 4: Building on Students’ Cultural and Linguistic Assets

The learning sciences have found that learning and teaching are cultural processes and that we best support learning when we include students’ cultural and linguistic backgrounds as pedagogical assets. This big idea means that AI systems need to support learning environments that enable teachers and learners to address the multiple elements of learning, including identity and culture. To do this, developers need to restructure the assumptions that are made about learners and what they know by keeping both teachers and learners in the loop. For example, AI systems can help personalize the materials for Spanish-speaking students and their parents by translating sections of text, or by providing just-in-time translations so that they can more fully participate in learning experiences. Another personalization could be an AI system where the agent speaks to students and engages them using speech patterns similar to those of the student.

Big Idea 5: Assessment and Feedback

There's been a lot of discussion about how AI systems can support teachers and students with new types of assessment, such as stealth assessment or formative assessment. Formative assessment provides specific information to a teacher about students' strengths and challenges so the teacher can adapt instruction to meet students' needs. Students' needs vary and can be challenging to anticipate; this is where AI systems can support teachers and learners. We won't get into assessment and feedback more here, but check out this Educator CIRCLS webinar on Assessment or read this post on AI and Formative Assessment to learn more about this big idea.

Looking ahead

These big ideas from the learning sciences should be incorporated into AI systems to create better, user-centered products. In addition, educators need to be involved in the process because they have valuable insights about what is and isn't working, in ways that complement researchers' expertise. Merlyn Mind Practitioner Advisory Board member Grace Magley reminds us that teachers "have to see real benefits, not just new tech" and "they need to be shown how it would work in a classroom full of diverse learners. They need to see benefits for the teacher as well as the learners."

This blog post is an invitation to you to connect with us and join the conversation on the future of AI in Education. If you are interested in getting involved, please visit:

Educator CIRCLS
AI CIRCLS
1Sawyer, R. (Ed.). (2014). The Cambridge Handbook of the Learning Sciences (2nd ed., Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139519526

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

I’m a Teacher, Will Artificial Intelligence Help Me?

Robot caricature in a yellow circle thinks of 0's and 1's, a teacher in a red heart thinks of people
by Judi Fusco and Pati Ruiz

Artificial Intelligence (AI) systems are becoming more prevalent everywhere including education. Educators often seem to wonder, “What is it?” and, “What can it do?” Let’s address these questions and then discuss why and how YOU should be involved!

What is it and what can it do for teachers?

Artificial intelligence (AI) is a field of computer science that lets machines make decisions and predictions. The goal of AI is to create machines that can mimic human capabilities. To do this, AI systems use many different techniques. You are probably using AI systems every day because they are embedded in our mobile phones and cars and include things like face recognition to unlock your phone, digital voice assistants, and mapping/route recommendations. We’re not going to go into the details of how AI works in this post, but you can read a prior post on AI and check out this glossary of AI terms that might be helpful if you want more background on the topic. In this post, we will focus on examples of AI systems that can help teachers.

Teachers have to do countless tasks, such as lesson planning, teaching, grading, mentoring, classroom management, keeping up with technology in the classroom and new pedagogical practices, monitoring progress, and administrative work, all while keeping students' social and emotional needs in mind. While AI has come a long way since the 1950s, when the term was coined and work on Intelligent Tutoring Systems began, it cannot replace a teacher in the classroom. We will share examples of how existing AI systems have successfully helped teachers and reduced their load.

Example: Personalized online math learning software for middle and high school students

Mathia provides coaching to students as they solve math problems and gives teachers a detailed picture of where each student is, as well as suggestions for conversation starters to talk about each student’s understanding. This support allows teachers to spend more time with students focused on learning, while also directly giving the students additional, useful feedback as they solve math problems.

Example: A platform that provides immediate feedback to students and assessment data to teachers

Another AI system that supports both teachers and students is ASSISTments. It is also currently focused on math. For students, it gives assistance in the form of hints and instant feedback while they do math homework. For teachers, it gives information about which homework problems were difficult and what the most common wrong answers were. This can prompt teachers to spend time discussing the problems that students need the most help on, and teachers can be sure to re-teach concepts based on common wrong answers.

Beyond teaching content, when you think about all the things a teacher does to manage their classroom and all the "plates" they must juggle to keep 25, 30, or more students on task, engaged, and learning, you can imagine they could use some support. The next three systems described primarily support teachers.

Example: A digital assistant for teachers

One AI system that helps with classroom management tasks is a multimodal digital assistant specifically developed for teachers with privacy in mind, called Merlyn. Merlyn looks like a small speaker, but does so much more. It allows teachers to use voice and a remote control to control content from a distance. For example, with Merlyn teachers can set timers and switch displays between their laptop, document camera, and interactive whiteboard. Teachers can control a web browser on their laptop and do things like share a presentation, go to a specific point in a video, show a website, or search. This frees them up to walk around the classroom and interact with students more easily.

Other ways AI systems can support teaching and learning

The examples above show three categories of how AI systems have helped teachers and their students. Here are three more. One is an AI system that can analyze the conversation from a classroom session and identify how much the teacher talked versus the students (i.e., TeachFX). This tool also identifies whether teachers let students build on each other's thoughts, leading to discussions. With the help of this AI system, teachers can work to engage their students in discussions and reflect on their practice.
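As a rough illustration of the kind of measure such a tool reports (this is not how TeachFX actually works, and real systems must first figure out who is speaking from classroom audio, which is much harder), here is a toy sketch that computes the share of words spoken by the teacher versus students in an already-labeled transcript:

```python
# Toy sketch: teacher vs. student talk share from a speaker-labeled transcript.
# The transcript below is made up for illustration.
transcript = [
    ("teacher", "Who can tell me what happens when we add 49 and 8?"),
    ("student", "I lined them up on the left and got 129."),
    ("teacher", "Interesting. Does 129 seem reasonable? Talk with your partner."),
    ("student", "49 is almost 50, and 50 plus 8 is 58, so 129 is way too big."),
]

word_counts = {"teacher": 0, "student": 0}
for speaker, utterance in transcript:
    word_counts[speaker] += len(utterance.split())

total_words = sum(word_counts.values())
for speaker, count in word_counts.items():
    print(f"{speaker}: {count}/{total_words} words ({count / total_words:.0%})")
```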

Grading is another task that is very important but very time consuming. Gradescope, for example, supports instructors in grading their existing paper-based and digital assignments in less time than it normally takes them. It does this by scanning text and sorting similar responses together so the teacher grades some of each type; the system then "learns" from the teacher, automatically grades the rest, and sends the grading to the teacher for review.
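Here is a simplified sketch of the "sort similar responses together" idea; it is not Gradescope's actual algorithm. Identical short answers (after light normalization) are grouped so the teacher grades one example per group and that grade is applied to the rest for review. Real systems handle handwriting and use much fuzzier matching; all names and answers below are made up.

```python
# Toy sketch: group identical short answers so the teacher grades one per group.
from collections import defaultdict

answers = {
    "Ana": "57", "Ben": " 57", "Carla": "417",
    "Dev": "57 ", "Eli": "129", "Fatima": "417",
}

groups = defaultdict(list)
for student, answer in answers.items():
    groups[answer.strip()].append(student)

# The teacher grades one representative response from each group...
teacher_grades = {"57": "correct",
                  "417": "added columns without carrying",
                  "129": "misaligned the columns"}

# ...and the system applies that grade to everyone in the group for review.
for answer, students in groups.items():
    print(f"Answer '{answer}' ({teacher_grades[answer]}): {', '.join(students)}")
```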

Finally, AI systems that are specialized within a subject matter can allow teachers to set up content-specific learning experiences. For example, in the domain of science, Inq-ITS allows teachers to select digital labs for their middle school students. When completing the assigned digital labs, students learn by doing. Inq-ITS autoscores the labs in real time and shows the teacher performance updates for each student. A teacher can use the reports to provide the appropriate support to students who need additional help. Inq-ITS also supports students with hints while they perform the labs.

Educators Must be Involved in the Design of AI Systems

The AI systems described above support or augment, but never replace, a teacher. We believe that AI systems can help by doing things that machines are good at, while teachers do the things that humans do best.

The AI systems above are also designed by teams that have made education and learning environments the main audience for their systems, and they have included teachers in their design process. Other AI tools exist, and even more are being developed to support teachers and students in other activities and tasks, but some don't have the same focus on education. We think it's important that educators, the end users, be involved in the design of AI systems for classrooms.

Some of the teams that design AI systems for education haven’t been in a classroom recently and when they were they probably weren’t the teacher. To make a technology that works in classrooms requires classroom experts (the main users) to be part of the design process and not an afterthought. When teachers give feedback, they help ensure 1) that systems work in ways that make sense for classrooms in general, and 2) that systems would work well in their specific classroom situations. (We’ll discuss why this is the case in another future blog post.)

A final, yet very important, reason for educators to be involved is that while AI systems can bring opportunities to support teaching and learning, there are also privacy, ethics, equity, and bias issues to be aware of. We don't want to add anything to your already full plate, but as technologies come into your classroom, you should ask questions about how a system supports students, whether it was designed for students like yours, what the privacy policies are, and what implications it might have for your students.

We understand that most teachers don't have a single extra minute, but it is crucial to have current teachers in the design process. If you learn and think about AI systems as they become more prevalent, you will become an even more valuable teacher or technology leader in your school or district. Your voice is important, and getting more educators involved makes a more powerful collective voice.

Looking ahead

If you’re still reading this blog, you probably have an interest in AI systems; below we suggest a few places to connect. Teachers are critical to the design of effective AI technologies for schools and classrooms. We hope this post has given you some insights into how AI systems might support you and your students. If you are interested in getting involved, we have some links for you below. Consider this blog post an invitation to you to connect with us and join the conversation; we hope you’ll join us in thinking about the future of AI in Education.

In our next post we will discuss how AI systems informed by learning science principles may help solve problems in learning environments.

Let us know your thoughts @educatorCIRCLS.

Ways to join:
Educator CIRCLS
AI CIRCLS
Join the ASSISTments Teacher Community
Leadership Programs — TeachFX

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Book Review: You Look Like a Thing and I Love You

This post was originally published on CIRCLEducators.org in October, 2020.

by Judi Fusco
Box with eyes, arms, and feet holds a martini glass and an outline of a heart. Text reads: "And I Love You"

During CIRCL Educators' Summer of Artificial Intelligence (AI), I read the book You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place1, by Dr. Janelle Shane. I got the recommendation for it from fellow CIRCL Educator, Angie Kalthoff.

I found the book helpful even though it is not about AI in education. I read and enjoyed the e-book and the audio version. As I started writing this review, I was driving somewhere with one of my teenagers and I asked if we could listen to the book. She rolled her eyes but was soon laughing out loud as we listened. I think that’s a great testament to how accessible the book is.

Teaching an AI

Many of us use AI products like Siri or Alexa on a regular basis. But how did they get "smart?" In the book, Dr. Shane writes about the process of training machine learning2 systems to be "intelligent." She tells us how they certainly don't start smart. Reading about the foibles, flailings, and failings that she has witnessed in her work helped me understand why it is so important to get the training part right and what needs to be considered as new products are developed.

Dr. Shane starts out by comparing machine learning and rule-based AI systems, two very different types of AI. Briefly, a rule-based system uses rules written by human programmers as it works with data to make decisions. By contrast, a machine learning algorithm3 is not given rules. Instead, humans pick an algorithm, give it a goal (maybe to make a prediction or decision), give it example data that helps the algorithm learn4, and then the algorithm has to figure out how to achieve that goal. Depending on the algorithm, it may discover its own rules (for some algorithms this means adjusting weights on the connections between what is input and what is output). From the example data, the algorithm "learns," or rather improves what it produces, through its experience with that data. It's important to note that the algorithm is doing the work to improve, not a human programmer. In the book, Dr. Shane explains that after she sets up the algorithm with a goal and gives it training data, she goes to get coffee and lets it work.
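To illustrate that contrast (this toy example is mine, not from the book), compare a rule a programmer writes by hand with a "rule" an algorithm finds for itself from labeled examples, here telling grapes from apples by weight:

```python
# Illustrative toy example: rule-based vs. machine-learned decision making.

# Rule-based: a human programmer writes the rule directly.
def rule_based_classifier(weight_in_grams):
    return "apple" if weight_in_grams > 50 else "grape"

# Machine learning (greatly simplified): the human supplies labeled examples
# and a goal ("separate the labels"); the algorithm searches for its own rule,
# here the weight threshold that classifies the most training examples correctly.
training_data = [(5, "grape"), (7, "grape"), (9, "grape"),
                 (140, "apple"), (170, "apple"), (200, "apple")]

def learn_threshold(examples):
    best_threshold, best_correct = None, -1
    for threshold, _ in examples:  # try each observed weight as a cut-off
        correct = sum(("apple" if weight > threshold else "grape") == label
                      for weight, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

threshold = learn_threshold(training_data)
print(f"Learned rule: call it an apple if it weighs more than {threshold} g")
print("Hand-written rule says:", rule_based_classifier(120))
print("Learned rule says:", "apple" if 120 > threshold else "grape")
```

Notice that the learned threshold comes entirely from the examples; change the examples and the rule changes, which is exactly why the training data matters so much.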

Strengths and Weaknesses

There are strengths and weaknesses in the machine learning approach. A strength is that as the algorithm tries to reach its goal, it can detect relationships and features in the data that the programmer may not have thought would be important, or may not even have been aware of. This can either be good or bad.

One way it can be good or positive is that sometimes an AI tries a novel solution because it isn't bogged down by knowledge of the constraints and rules of the world. However, not knowing about constraints in the world can also be bad and lead to impossible ideas. For example, in the book, Dr. Shane discusses how, in simulated worlds, an AI will try things that won't work in our world because it doesn't understand the laws of physics. To help the AI, a human programmer needs to specify what is and isn't possible. Also, an AI will take shortcuts that may lead to the goal but may not be fair. One time, an AI created a solution that took advantage of a situation: while it was playing a game, the AI discovered there wasn't enough RAM in its opponent's computer for a specific move. The AI would make that move, cause the other computer to run out of RAM and crash, and then win every time. Dr. Shane discusses many other instances where an AI exploits a weakness to look like it's smart.

In addition, one other problem we have learned about from machine learning work is that these systems highlight and exacerbate problems present in their training data. For example, much training data comes from the internet, and much of the data on the internet is full of bias. When biased data are used to train an AI, the biases and problems in the data become what guide the AI toward its goal. Because of this, our biases, found on the internet, become perpetuated in the decisions the machine learning algorithms make. (Read about some of the unfair and biased decisions that have occurred when AI was used to make decisions about defendants in the justice system.)

Bias

People often think that machines are "fair and unbiased," but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)

In the book, Dr. Shane says, machine learning occurs in the AI algorithms by “copying humans” — the algorithms don’t find the “best solution” or an unbiased one, they are seeking a way to do “what the humans would have done” (p 24) in the past because of the data they use for training. What do you think would happen if an AI were screening job candidates based on how companies typically hired in the past? (Spoiler alert: hiring practices do not become less discriminatory and the algorithms perpetuate and extend biased hiring.)
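As a deliberately oversimplified illustration of "copying humans" (the scenario, data, and matching rule below are mine, not from the book), a model that predicts hiring by tallying what past decision-makers did for similar candidates will simply reproduce their pattern:

```python
# Illustrative only: a "model" that copies past human hiring decisions.
from collections import Counter

# Hypothetical historical decisions: candidates with similar skills, but past
# decision-makers consistently favored one school over the other.
past_decisions = [
    ({"school": "State U", "skills": "strong"}, "hire"),
    ({"school": "State U", "skills": "strong"}, "hire"),
    ({"school": "State U", "skills": "weak"}, "hire"),
    ({"school": "City College", "skills": "strong"}, "no hire"),
    ({"school": "City College", "skills": "strong"}, "no hire"),
    ({"school": "City College", "skills": "weak"}, "no hire"),
]

def predict(candidate):
    """Predict by tallying what humans decided for candidates from the same school."""
    outcomes = Counter(decision for features, decision in past_decisions
                       if features["school"] == candidate["school"])
    return outcomes.most_common(1)[0][0]

# A strong candidate is rejected purely because of the pattern in past decisions.
print(predict({"school": "City College", "skills": "strong"}))  # -> "no hire"
```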

A related problem comes about because machine learning AIs make their own rules. These rules are not explicitly stated in some machine learning algorithms so we (humans aka the creators and the users) don’t always know what an AI is doing. There are calls for machine learning to write out the rules it creates so that humans can understand them, but this is a very hard problem and it won’t be easy to fix. (In addition, some algorithms are proprietary and companies won’t let us know what is happening.)

Integrating AIs into our lives

It feels necessary to know how a machine is making decisions when it is tasked with making decisions about people's lives (e.g., prison release, hiring, and job performance). We should not blindly trust how AIs make decisions; an AI has no idea of the consequences of its decisions. We can still use them to help us with our work, but we should be very cautious about the types of problems we automate. We also need to ensure that an AI makes clear what it is doing so that humans can review the automation, know how to override its decisions, and understand the consequences of an incorrect decision. Dr. Shane reminds us that an "AI can't be bribed but it also can't raise moral objections to anything it's asked to do" (p. 4).

In addition, we need to ensure the data we use for training are as representative as possible to avoid bias, make sure the system can't take shortcuts to meet its goal, and make sure the systems work for many different populations (e.g., across gender, race, and learning differences). Also, an AI is not as smart as a human; in fact, Dr. Shane shares that most AI systems using machine learning (in 2019) have the approximate brainpower of a worm. Machine learning can help us automate tasks, but we still have a lot of work to do to ensure that AIs don't harm or damage people.

What are your thoughts or questions on machine learning or other types of AI in education? Tweet to @CIRCLEducators and be part of the conversation.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

See a recent TED Talk by author Janelle Shane.


Notes:

  1. Read the book to find out what the title means!
  2. Machine learning is one of several AI approaches.
  3. Machine Learning is a general term that also includes neural networks and the more specialized neural network class of Deep Learning. Note also that decision-tree algorithms are a well-known class of ML algorithms that use rules.
  4. Some algorithms “learn” with labeled examples and some without, but that’s a discussion beyond the scope of this post.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]