
Glossary of Artificial Intelligence Terms for Educators

Photo by Rock’n Roll Monkey on Unsplash
by Pati Ruiz and Judi Fusco

This glossary was written for educators to reference when learning about and using artificial intelligence (AI). We will start with a definition of artificial intelligence and then provide definitions of AI-related terms in alphabetical order. This glossary was last updated on March 31, 2024.

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” to do things like make decisions, discover patterns, and perform some sort of action. AI is a general term, and there are more specific terms used in the field of AI. AI systems can be built in different ways; two of the primary ways are: (1) through the use of rules provided by a human (rule-based systems); or (2) with machine learning algorithms. Many newer AI systems use machine learning (see definition of machine learning below).

________________________________________________
Algorithm: Algorithms are the “brains” of an AI system and determine its decisions; in other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine learning for more), or they can be rule-based, with human programmers providing the rules.

Artificial General Intelligence (AGI): Artificial general intelligence has not yet been realized; it describes an AI system that could learn, understand, and solve any problem that a human can.

Artificial Narrow Intelligence (ANI): AI that solves narrow problems is called artificial narrow intelligence. For example, a smartphone can use facial recognition to identify photos of an individual in the Photos app, but that same system cannot identify sounds.

Generative AI (GenAI): A type of machine learning that generates content, currently including text, images, music, and videos, and can create 3D models from 2D input. ChatGPT is a specific example of GenAI (see the ChatGPT definition below).

Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see definitions for neural networks and Natural Language Processing below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).

Transformer models: Used in GenAI (the T stands for Transformer), transformer models are a type of language model. They are neural networks and also classified as deep learning models. They give AI systems the ability to determine and focus on important parts of the input and output using something called a self-attention mechanism to help.

Self-attention mechanism: These mechanisms, also referred to as attention, help systems determine the important aspects of input in different ways. There are several types, and they were inspired by how humans can direct their attention to important features in the world, understand ambiguity, and encode information.
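One way to see what a self-attention mechanism does is a stripped-down sketch in Python. This is a toy: real transformers learn separate query, key, and value projections, which this version omits entirely.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """For each word vector, score its similarity (dot product)
    to every word vector, turn the scores into attention weights,
    and return the weighted mixture of all the vectors."""
    outputs = []
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in vectors]
        weights = softmax(scores)
        mixed = [
            sum(w * v[d] for w, v in zip(weights, vectors))
            for d in range(len(q))
        ]
        outputs.append(mixed)
    return outputs

# Three toy "word" vectors: the first two are similar,
# so they attend strongly to each other.
words = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = self_attention(words)
```

Each output is a blend of all the inputs, weighted by similarity to the word being processed; that "focus on the relevant parts" behavior is the core idea of attention.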

Large language models (LLMs): Large language models form the foundation for generative AI (GenAI) systems. GenAI systems include some chatbots and tools, including OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, an LLM detects statistical relationships in its training data, such as how likely a word is to appear after the words that come before it. As they answer questions or write text, LLMs use this model of word likelihood to predict the next word to generate. LLMs are a type of foundation model, which are pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include data sets of text without the creators’ consent.
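To make the "predict the next word" idea concrete, here is a toy sketch in Python. This is not how any real LLM is built; real models are neural networks that weigh long stretches of context, while this counter only looks at the single previous word. The tiny corpus is made up.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text an LLM trains on.
corpus = (
    "the cat sat on the mat . the cat ate . the dog sat on the rug ."
).split()

# Count how often each word follows each previous word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed this one in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

The model "knows" nothing about cats or mats; it only reproduces statistical patterns in its training text, which is why the quality and biases of that text matter so much.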

Computer Vision: Computer Vision is a set of computational challenges concerned with teaching computers how to understand visual information, including objects, pictures, scenes, and movement (including video). Computer Vision (often thought of as an AI problem) uses techniques like machine learning to achieve this goal.

Critical AI: Critical AI is an approach to examining AI from a perspective that focuses on reflective assessment and critique as a way of understanding and challenging existing and historical structures within AI. Read more about critical AI.

Data: Data are units of information about people or objects that can be used by AI technologies.

Training Data: This is the data used to train the algorithm or machine learning model. It has been generated by humans in their work or other contexts in their past. While it sounds simple, training data is so important because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)1
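The hiring example above can be sketched in a few lines of Python. The records, school names, and "model" here are entirely made up, and a real hiring system would be far more complex, but the failure mode is the same: the model learns the old pattern, not merit.

```python
# Hypothetical historical hiring records: (background, was_hired).
# Past decisions favored applicants from "school_a".
past = [
    ("school_a", True), ("school_a", True), ("school_a", True),
    ("school_b", False), ("school_b", False), ("school_a", True),
]

# A naive "model" that simply learns the hiring rate per background.
hire_rate = {}
for school in {"school_a", "school_b"}:
    outcomes = [hired for s, hired in past if s == school]
    hire_rate[school] = sum(outcomes) / len(outcomes)

def predict(school):
    """Recommend hiring if the historical hiring rate exceeds 50%."""
    return hire_rate[school] > 0.5

# The model reproduces the old bias: "school_b" applicants are
# rejected no matter how qualified any individual might be.
```

Nothing in the code is malicious; the bias lives in the training data, and the algorithm faithfully amplifies it.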

Foundation Models: Foundation Models represent a large amount of data that can be used as a foundation for developing other models. For example, generative AI systems use large language foundation models. They can be a way to speed up the development of new systems, but there is controversy about using foundation models since, depending on where their data comes from, there are different issues of trustworthiness and bias. Jitendra Malik, Professor of Computer Science at UC Berkeley, once said the following about foundation models: “These models are really castles in the air, they have no foundation whatsoever.”

Human-centered Perspective: A human-centered perspective sees AI systems working with humans and helping to augment human skills. People should always play a leading role in education, and AI systems should not replace teachers.

Intelligence Augmentation (IA): Augmenting means making something greater; in some cases, perhaps it means making it possible to do the same task with less effort. Maybe it means letting a human (perhaps teacher) choose to not do all the redundant tasks in a classroom but automate some of them so they can do more things that only a human can do. It may mean other things. There’s a fine line between augmenting and replacing and technologies should be designed so that humans can choose what a system does and when it does it.

Intelligent Tutoring Systems (ITS): A computer system or digital learning environment that gives instant and custom feedback to students. An Intelligent Tutoring System may use rule-based AI (rules provided by a human) or use machine learning under the hood. By under the hood we mean the underlying algorithms and code that an ITS is built with. ITSs can support adaptive learning.
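A rule-based ITS boils down to human-written rules that map what a student did to tailored feedback. Here is a minimal sketch; the rules and messages are invented for illustration, and real systems use far richer models of typical mistakes.

```python
def feedback(answer, correct):
    """A rule-based tutor: each rule, written by a human author,
    maps a recognizable mistake to a custom hint."""
    if answer == correct:
        return "Correct! Nice work."
    if answer == -correct:
        return "Check the sign of your answer."
    if abs(answer - correct) <= 2:
        return "Close! Re-check your arithmetic."
    return "Let's review the steps for this problem together."

# 7 - 12 = -5; a student answering 5 likely made a sign error,
# and the tutor's hint points at exactly that.
print(feedback(5, -5))
```

Because a human wrote each rule, the feedback is easy to inspect and explain, which is one appeal of rule-based designs over machine-learned ones.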

Adaptive Learning: Subject or course material is adjusted based on the performance of the learner. The difficulty of material, the pacing, sequence, type of help given, or other features can be adapted based on the learner’s prior responses.
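As a sketch of the idea, adaptivity can be as simple as a rule that moves a difficulty level up or down based on the learner's recent answers. Real adaptive systems use much richer models of the learner; the thresholds below are arbitrary choices for illustration.

```python
def adapt_difficulty(level, recent):
    """Adjust a difficulty level (1-10) based on the learner's
    last three responses (True = correct, False = incorrect)."""
    last_three = recent[-3:]
    if len(last_three) == 3 and all(last_three):
        return min(level + 1, 10)   # mastering: step up
    if len(last_three) == 3 and not any(last_three):
        return max(level - 1, 1)    # struggling: step down
    return level                    # mixed results: hold steady

# Three correct answers in a row raise the difficulty from 5 to 6.
print(adapt_difficulty(5, [True, True, True]))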

Interpretable Machine Learning (IML): Interpretable machine learning, sometimes also called interpretable AI, describes the creation of models that are inherently interpretable in that they provide their own explanations for their decisions. This approach is preferable to that of explainable machine learning (see definition below) for many reasons including the fact that we should understand what is happening from the beginning in our systems, rather than try to “explain” black boxes after the fact.

Black Boxes: We call things we don’t understand “black boxes” because what happens inside the box cannot be seen. Many machine learning algorithms are “black boxes,” meaning that we don’t have an understanding of how a system is using features of the data when making its decisions (generally, we do know what features are used but not how they are used). There are currently two primary ways to pull back the curtain on the black boxes of AI algorithms: interpretable machine learning (see definition above) and explainable machine learning (see definition below).

Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.

It’s important to note that in machine learning, the algorithm is doing the work to improve and does not have the help of a human programmer. It is also important to note three more things. One, in most cases the algorithm is learning an association (when X occurs, it usually means Y) from training data that is from the past. Two, since the data is historical, it may contain biases and assumptions that we do not want to perpetuate. Three, there are many questions about involving humans in the loop with AI systems; when using ML to solve AI problems, a human may not be able to understand the rules the algorithm is creating and using to make decisions. This could be especially problematic if a human learner was harmed by a decision a machine made and there was no way to appeal the decision.
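Here is a tiny, hypothetical example of an algorithm finding its own rule from past data, with no human writing the rule: it searches for the passing cutoff that best fits made-up records of hours studied and exam results.

```python
# Past data: (hours_studied, passed_exam). The numbers are invented.
data = [(1, False), (2, False), (3, False), (5, True), (6, True), (8, True)]

def learn_threshold(examples):
    """Try every candidate cutoff seen in the data and keep the
    one that makes the fewest mistakes on the training examples."""
    best_cut, best_errors = None, len(examples) + 1
    for cut, _ in examples:
        errors = sum((hours >= cut) != passed for hours, passed in examples)
        if errors < best_errors:
            best_cut, best_errors = cut, errors
    return best_cut

cut = learn_threshold(data)  # the algorithm "discovered" its own rule
print(cut)
```

The learned rule ("predict passing at this many hours or more") came entirely from the data; if the data were biased or unrepresentative, the rule would be too.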

Illustration of the topology of a generic Artificial Neural Network. A first column of three stacked circles labeled input pointing to a second column of four stacked circles labeled hidden pointing to a final column of two circles labeled output. All circles are the same size and a pale yellow color with a thin black outline.
Illustration of the topology of a generic Artificial Neural Network. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Neural Networks (NN): Neural networks, also called artificial neural networks (ANNs), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in a human brain. In a neural network, after data enter the first layer, the data go through a hidden layer of nodes where calculations that adjust the strength of connections in the nodes are performed, and then go to an output layer.

Deep Learning: Deep learning models are a subset of neural networks. With multiple hidden layers, deep learning algorithms are potentially able to recognize more subtle and complex patterns. Like neural networks, deep learning algorithms involve interconnected nodes where weights are adjusted, but as mentioned earlier there are more layers and more calculations that can make adjustments to the output to determine each decision. The decisions by deep learning models are often very difficult to interpret as there are so many hidden layers doing different calculations that are not easily translatable into English rules (or another human-readable language).
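A forward pass through the small 3-input, 4-hidden, 2-output network in the illustration can be written in a few lines of Python. The weights below are made up; training is the process that would adjust them.

```python
import math

def sigmoid(x):
    """Squash a number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each node takes a weighted sum of the inputs,
    adds a bias, and applies an activation function."""
    return [
        sigmoid(sum(w * i for w, i in zip(node_w, inputs)) + b)
        for node_w, b in zip(weights, biases)
    ]

# Invented weights for a 3-4-2 network like the one pictured.
hidden_w = [[0.2, -0.5, 0.1], [0.4, 0.3, -0.2],
            [-0.1, 0.6, 0.3], [0.5, -0.4, 0.2]]
hidden_b = [0.0, 0.1, -0.1, 0.0]
output_w = [[0.3, -0.2, 0.5, 0.1], [-0.4, 0.2, 0.1, 0.6]]
output_b = [0.0, 0.0]

x = [1.0, 0.5, -1.0]          # the three input nodes
hidden = layer(x, hidden_w, hidden_b)    # the four hidden nodes
output = layer(hidden, output_w, output_b)  # the two output nodes
```

A deep network simply stacks more hidden layers between input and output; with dozens of layers and millions of weights, tracing why a particular output emerged becomes the interpretability problem described above.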

Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

NLP technologies help in many situations, including: scanning texts to turn them into editable text (optical character recognition), speech to text, voice-based computer help systems, grammatical correction (like auto-correct or Grammarly), summarizing texts, and others.
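As a concrete taste of one NLP task, here is a simple auto-correct sketch built on edit distance (counting the single-letter changes between two words). Real spell-checkers add word frequencies, keyboard layout, and sentence context, which this toy omits; the five-word dictionary is made up.

```python
def edit_distance(a, b):
    """Minimum number of single-letter insertions, deletions,
    or substitutions needed to turn word a into word b."""
    rows = [list(range(len(b) + 1))]
    for i, ca in enumerate(a, 1):
        row = [i]
        for j, cb in enumerate(b, 1):
            row.append(min(
                rows[i - 1][j] + 1,               # delete from a
                row[j - 1] + 1,                   # insert into a
                rows[i - 1][j - 1] + (ca != cb),  # substitute
            ))
        rows.append(row)
    return rows[-1][-1]

dictionary = ["their", "there", "these", "the", "three"]

def autocorrect(word):
    """Suggest the dictionary word requiring the fewest edits."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(autocorrect("teh"))  # the closest dictionary word is "the"
```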

Robots: Robots are embodied mechanical machines that are capable of doing a physical task for humans. “Bots” are typically software agents that perform tasks in a software application (e.g., in an intelligent tutoring system they may offer help). Bots are sometimes called conversational agents. Both robots and bots can contain AI, including machine learning, but do not have to have it. AI can help robots and bots perform tasks in more adaptive and complex ways.

User Experience Design/User Interface Design (UX/UI): User-experience/user-interface design refers to the overall experience users have with a product. These approaches are not limited to AI work. Product designers implement UX/UI approaches to design and understand the experiences their users have with their technologies.

Explainable Machine Learning (XML) or Explainable AI (XAI): Researchers have developed a set of processes and methods that allow humans to better understand the results and outputs of machine learning algorithms. This helps developers of AI-mediated tools understand how the systems they design work and can help them ensure that they work correctly and are meeting requirements and regulatory standards.

It is important to note that the term “explainable,” in the context of explainable machine learning or explainable AI, refers to an after-the-fact account of how a model arrived at its output, not a true understanding of the model’s inner workings. In theory, explainable ML/AI means that an ML/AI model will be “explained” after the algorithm makes its decision so that we can understand how the model works. This often entails using another algorithm to help explain what is happening inside the “black box.” One issue with XML and XAI is that we cannot know for certain whether the explanation we are getting is correct, and therefore we cannot technically trust either the explanation or the original model. Instead, researchers recommend the use of interpretable models.

Thank you to Michael Chang, Ph.D., a CIRCLS postdoctoral scholar, for reviewing this post and to Eric Nentrup for support with specific definitions. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

________________________________________________
1 Fusco, J. (2020). Book Review: You Look Like a Thing and I Love You. CIRCLEducators Blog. Retrieved from https://circleducators.org/review-you-look-like-a-thing/

Artificial Intelligence and Education: What We’re Up To

by Pati Ruiz

I was recently asked for an overview of the AI and Education landscape and how we are participating in it. In addition to promoting equity and accountability in AI, here is a summary of our recent writing and research including key ideas from our work. We believe that AI systems should support and augment, but never replace, a human. To ensure this, emerging technology systems and tools should be developed with the input of educators, learners, and families. As always, please share your thoughts with us @EducatorCIRCLS.

Writing and Presentations

AI and the Future of Teaching and Learning | A blog series we partnered on with the U.S. Department of Education’s Office of Educational Technology

Key Ideas:

  • Educational technology is evolving to include artificial intelligence.
  • Artificial intelligence will bring “human-like” features and agency into future technologies.
  • Policy will have an important role in guiding the uses of artificial intelligence in education to realize benefits while limiting risks.
  • Artificial intelligence will enable students and teachers to interact with technology in human-like ways.
  • Individuals will find it difficult to make choices that balance benefits and risks.
  • Creating policies can strengthen how people make decisions about artificial intelligence in education.
  • Educational applications of many types will be artificial intelligence-enabled, including teaching and learning, guiding and advising, and administration and resource planning applications.
  • Use of artificial intelligence systems in school technology is presently light, allowing time for policy to have an impact on safety, equity, and effectiveness.
  • Policies should encourage teacher engagement, including the development of teachers’ trust, and their confidence to recommend not using untrustworthy artificial intelligence systems.
  • Policies should incorporate experiences for educators to shape and support their own professional learning about how to utilize artificial intelligence systems in teaching and learning.
  • Including and informing educators in design and development decisions will result in more useful and usable teacher supports.

AI or Intelligence Augmentation for Education? | Communications of the ACM 

Key Ideas:

  • We recommend a focus on intelligence augmentation (IA) in education that would put educators’ professional judgment and learners’ voice at the center of innovative designs and features.
  • An IA system might save an educator administrative time (for example, in grading papers) and support their attention to their students’ struggles and needs.
  • An IA system might help educators notice when a student is participating less and suggest strategies for engagement, perhaps even based on what worked to engage the student in a related classroom situation.
  • We hope that IA for education will focus attention on how human and computational intelligence could come together for the benefit of learners.

Artificial Intelligence 101: Covering the Basics for Educators | Digital Promise Blog

Key Ideas:

  • AI lets machines make decisions and predictions.
  • Teachers are essential to education, and AI should be used to better support them.
  • Technology often comes with ethical implications, and AI is no different; educators should ask questions and investigate AI tools and systems before adopting them in a classroom.

Teachers Partnering with Artificial Intelligence: Augmentation and Automation | Educator CIRCLS Blog 

Key Ideas:

  • Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue.
  • AI systems should support or augment, but never replace, a teacher.
  • These systems should be developed with the input of teachers, students, and families.

Artificial Intelligence and Adaptivity to Strengthen Equity in Student Learning | Getting Smart

Key Ideas:

  • Educators, researchers, and developers prioritize adaptivity when it comes to emerging learning technologies.
  • Incorporating AI tools requires specific and precise inputs to generate useful outputs.
  • When practitioners, learners, researchers, and developers work together with shared values, more equitable learning is possible.

Ethical AI | EngageAI Nexus Blog 

Key Ideas:

  • Ethical considerations should be front and center throughout the development of any new AI innovation, and ethics should be central to our definition of success for AI.
  • Policies and guidelines from the government, accreditation requirements in education, and standards of professional ethics are all needed to reinforce ethics in AI.
  • Public education is also important so that end-users can make informed decisions based on a full understanding of key issues such as transparency and privacy.

Definitions | Glossary of Artificial Intelligence Terms for Educators: A glossary written for educators to reference when learning about and using artificial intelligence (AI).

Presentation | Insights on Artificial Intelligence and the Future of Teaching and Learning at the 2023 Consortium for School Network (CoSN) Conference.

Listening Sessions | AI and the Future of Learning: Listening Sessions | We supported the U.S. Department of Education’s Office of Educational Technology listening sessions about Artificial Intelligence (AI). We connected with teachers, educational leaders, students, parents, technologists, researchers, and policymakers to gather input and ideas, and to engage in conversations that will help the Department shape a vision for AI policy that is inclusive of emerging research and practices while also informed by the opportunities and risks.

Ongoing Research

Emerging technology adoption framework: For PK-12 education | Educator CIRCLS Emerging Technology Advisory Board

Key Ideas:

  • A framework we co-developed with education community members to help ensure that educational leaders, technology specialists, teachers, students, and families are all part of the evaluation and adoption process for placing emerging technologies (including artificial intelligence and machine learning) in PK-12 classrooms.
  • We are currently working with League member Willy Haug, Director of Technology and Innovation, to modify this framework for adoption at Menlo Park City School District.

Study | ChatGPT/GPT-4 for Developing Sample Computational Thinking Lesson Plans at North Salem School District

  • I am working with Dr. Julio Vazquez, Director of Instruction and Human Resources at North Salem School District, who is working with his team to develop sample computational thinking lessons across all subject areas K-12 using ChatGPT. These lessons are not meant to be implemented in the classroom “as is,” but rather, these sample lessons are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process.

Artificial Intelligence Practitioner Advisory Board | A group that will explore the use of emerging technologies in classrooms, and how we might leverage technologies to better support educators and their students. We hope to foster a sense of community within the group where researchers and developers can learn along with you as we all go through the process of reviewing technologies and making recommendations on their use. This Practitioner Advisory Board is supported by two NSF projects.


Developing Sample Computational Thinking Lessons with ChatGPT

by Pati Ruiz, Merijke Coenraad, and Judi Fusco with contributions from Julio Vazquez

What is ChatGPT?

Let’s start with some definitions: ChatGPT is commonly classified as a natural language processing model, meaning it deals with language and human speech patterns, and as “generative artificial intelligence,” meaning that it is AI that creates new content, in this case new text.

More specifically, ChatGPT is a chat-based generative pre-trained transformer. This means the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer). Basically, it’s a chatbot that allows a user to ask a question in plain language and get a response in a way similar to how a human would reply.

What does this mean for education?

“ChatGPT is a good prompt for conversation.
I see this tool as a starting point for teachers and students.”
-Julio Vazquez, North Salem Central School District

Despite the precedent of banning access to ChatGPT set by New York City Public Schools in January 2023, not all school districts are following suit. Some educators believe that these AI systems and tools are out in the world, and the best thing educators can do is to teach students to partner with AI tools so they can be better prepared for a technological world. For example, English teacher Cherie Shields was recently interviewed by the New York Times, where she shared that she assigned students in one of her classes to use ChatGPT to create outlines for a recent essay assignment. She shared that the process helped deepen students’ understanding of the stories while also teaching them to interact with an AI system by manipulating their inputs to get the responses they were looking for. In this case, ChatGPT became a tool that can support learning when we thoughtfully include it in our lessons and also guide students in using it well.

Dr. Julio Vazquez, Director of Instruction and Human Resources, and his team are encouraging experimentation and access to ChatGPT for all faculty and staff and are thinking about how to provide students with access in a manner that will not conflict with student privacy laws. Staff members are rolling their sleeves up and starting to explore and learn about how they can use it with their students productively. In fact, they are exploring the use of ChatGPT to develop sample Computational Thinking (CT) lesson plans that the team uses as a jumping off point in their CT Pathways development process.

ChatGPT for Developing Sample Computational Thinking Lesson Plans

In a recent conversation with Dr. Vazquez, we asked him more about how he and his teachers are incorporating ChatGPT in their computational thinking lesson planning process.

Dr. Vazquez and his colleague Cynthia Sandler, Library Media Specialist, started by entering prompts into ChatGPT and seeing what came out. The searches started with prompt terms that went something like “generate a 5th grade lesson for computational thinking focusing on science.”

As the team began to analyze the lesson plans that came out, they realized they needed to make adjustments. Julio shared that he and his team have become better at giving ChatGPT enough context so that the lessons that are developed are closer to what the team expects of a lesson plan and the content better aligns to both CT and content area standards. For example, a more recent lesson prompt included:

“write a science lesson that integrates 9-12.CT.1: Create a simple digital model that makes predictions of outcomes, and HS-PS1-5: Apply scientific principles and evidence to explain how the rate of a physical or chemical change is affected when conditions are varied.”

The prompt terms and outputs were documented and provided a good starting point for sparking conversation. On first pass, the team collectively agreed that they liked the structure of the generated lesson plans. Beyond format, and in terms of computational thinking and subject area standards, the prompts entered into ChatGPT also included Habits of Mind, thinking dispositions implemented in North Salem, as well as the use of Visible Thinking Routines.

Dr. Vazquez and his team have worked with ChatGPT to develop sample computational thinking lessons across all subject areas K-12. These lessons are not meant to be implemented in the classroom “as is,” but rather, these sample lessons are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process. We look forward to sharing more! Until then, do you have questions for us? Are you integrating ChatGPT in your classroom, school, or district? Let us know @EducatorCIRCLS.


Teachers Partnering with Artificial Intelligence: Augmentation and Automation


By Pati Ruiz and Judi Fusco

Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue. Our starting point is that AI systems should support or augment, but never replace, a teacher. In order to ensure this, these systems should be developed with the input of teachers, students, and families.

So, what types of AI systems do teachers want to see developed? A group of teachers from the Merlyn Mind Practitioner Advisory Board shared ideas for how AI might help teachers better support their students. One scenario emerged around students who have Individualized Education Programs or Plans (IEPs)1. In this post we will describe how an AI system might support teachers and students by automating:

  1. Planning and Supporting Preferences
  2. Monitoring
  3. Documentation

Planning and Supporting Preferences

First, a teacher could input student instructional plans into the system. Then, the system can review the plans, make recommendations, and send alerts to the teacher when something may not work for a student. In the alert, the system could provide suggestions of adaptations on lessons or assignments based on the needs of each student. For example, an AI system can scan what’s coming up in an instructional unit and alert the teacher that the website they selected does not meet the accessibility standards required by the students in the classroom. A more advanced system could also suggest an alternative option, or even better, search for multiple resources that are appropriate and let the teacher decide what resources are best suited for their students’ instructional needs. In all cases, the AI system is only helping and making suggestions that the teacher may act on.

An AI system can also allow for a broader range of inputs from students for assignments based on their needs. For example, if a student accommodation includes submitting assignments as recorded audio, but the teacher prefers written assignments, an AI system can convert the student’s audio to text so the teacher can review or grade the text. The speech-to-text tool should also allow the teacher to hear the student’s voice for a particular sentence or phrase, for example, if the transcription was not successful. Alternatively, if a student needs to hear the teacher’s comments on their assignments instead of reading them, the AI system can convert the text comments into spoken text for the student to hear. To additionally help the teacher, the system might suggest comments that they had written for another student so the teacher can reuse or repurpose them. The system might also remind the teacher of a student’s preference for feedback and, if the student prefers verbal feedback, the teacher could read and record the comments for that more personal touch.

Monitoring

To support teachers in providing adequate accommodations for their students, an AI system can monitor student IEP information and make automated recommendations for needed support. For example, the system could identify students who require extended time and either share a list with the teacher or make appropriate adjustments to due dates for individual students in a learning management system. Here, we point out the need for AI systems to be able to interact with other systems or be embedded within them. Additionally, the system must do this in a way that does not expose sensitive information about students to the whole class.
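A minimal sketch of the extended-time idea, assuming a hypothetical per-student accommodations record and a simple days-based extension policy (both invented for illustration):

```python
from datetime import date, timedelta

# Hypothetical accommodations on file, keyed by student ID.
accommodations = {
    "s_041": {"extended_time_days": 2},
    "s_113": {},  # no accommodation on file
}

def adjusted_due_date(student_id: str, class_due: date) -> date:
    """Apply a student's extended-time accommodation to the class due date."""
    extra = accommodations.get(student_id, {}).get("extended_time_days", 0)
    return class_due + timedelta(days=extra)

# Each student sees only their own adjusted date; the roster of
# accommodations itself is never exposed to the class.
due = date(2024, 5, 10)
per_student = {sid: adjusted_due_date(sid, due) for sid in accommodations}
```

In practice this is the piece that would need to interact with a learning management system, which is why interoperability matters here.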

Related to the text-to-speech and speech-to-text ideas discussed above, an AI system can also provide individualized read-aloud capabilities for students who need that support. The system could also remind the teacher to provide tools, like headphones or closed captioning, for students who need to listen to content. We firmly believe that AI systems can help by doing things that machines are good at, while continuing to enable teachers to focus on what humans do best—like developing interpersonal relationships and identifying nuanced needs. With these types of automated supports, it is important to ensure that teachers have the ability to make the final decisions about students’ needs and that students have the agency to accept or decline supports as they go.

Documentation

Supporting a classroom with students who have varying needs—whether they are documented in an IEP or not—requires a significant amount of monitoring and reporting on the part of educators. An AI system could support teachers by not only monitoring the individual requirements of students, but also documenting the adjustments and accommodations that were made for each student. This documentation could then be shared with the students’ families to provide a summary of the work that students have accomplished and how they have been supported in completing that work. Of course, a teacher would review and verify that the summary produced by the AI system is accurate and flag any issues with the write-ups that would need to be addressed by the AI design team.

By the end of the instructional unit, teachers would be able to review reports of student progress, identify what worked and what didn’t, and ensure that all students are making meaningful progress. Automating planning, tracking, and documentation can give a teacher more time to care for students; however, given the various risks AI systems bring, it is crucial that teachers also have the capability to override an AI system when needed.

Risks

The imagined AI system described above helps teachers do what they do best: it supports them in ensuring their students receive the accommodations they require and then documents those accommodations. Using such systems will come with risks, and AI systems that engage with student IEP data need to have the highest level of data privacy and oversight. As we discussed earlier, educators must be involved—for example, the teacher is in charge of giving feedback, but the system may make suggestions that help the teacher give better feedback. If educator experts are not in the loop, there could be harmful consequences for students. Educators must be diligent and not assume that every accommodation determined by an AI system is correct or the best decision. AI systems lack full context and the ability to make human decisions. Educators must have oversight and be able to verify and approve every decision made by the system.
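The oversight requirement can be expressed as a simple design rule: system proposals sit in a pending state and take effect only after a teacher approves them. This is a hypothetical sketch; all class and method names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    description: str
    status: str = "pending"  # pending -> approved | declined

@dataclass
class ReviewQueue:
    """Holds AI-generated suggestions until a teacher acts on them."""
    items: list = field(default_factory=list)

    def propose(self, description: str) -> Suggestion:
        s = Suggestion(description)
        self.items.append(s)
        return s

    def approve(self, s: Suggestion) -> None:
        s.status = "approved"  # only now may the change be applied

    def decline(self, s: Suggestion) -> None:
        s.status = "declined"

queue = ReviewQueue()
s = queue.propose("Swap reading for an accessible alternative")
# Nothing is applied until approval: this list stays empty.
applied = [i for i in queue.items if i.status == "approved"]
```

The key property is that the system can never skip the pending state: the only path to "approved" runs through a human action.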

Educator Voices

This blog post presents an imagined AI system based on conversations with a group of practitioners from the Merlyn Mind Practitioner Advisory Board. We need more teachers and educators involved in these conversations, so please consider this blog post an invitation to connect with us and join the conversation on the future of AI in Education. In addition to Merlyn Mind, if you are interested in getting involved, please visit the links below.

1 An IEP is a legal document in the United States that is developed for all public school children who need special education. It is created by district personnel with input from the child’s guardians and is reviewed every year. For more information see https://www2.ed.gov/about/offices/list/ocr/docs/edlite-FAPE504.html

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

How Can AI Systems Support Teachers: 5 Big Ideas from the Learning Sciences

This post was originally published on the Digital Promise website.

By Pati Ruiz and Judi Fusco

The learning sciences study the design and implementation of effective learning environments by drawing on a variety of perspectives across a range of physical, social, and technological spaces.1 The learning sciences focus on human learning and on helping individuals achieve their fullest potential and attain 21st-century skills. Because of this focus, the learning sciences should be foundational in the design and development of emerging technologies for teaching and learning. AI systems are an emerging technology that is starting to play a significant role in the redesign of learning environments. To increase our chances of creating successful AI systems for learning, we should ground them in the learning sciences. We’ll discuss five big ideas from the learning sciences in relation to the design of AI systems: Representation and Supports; Collaboration; How Students Think; Building on Students’ Cultural and Linguistic Assets; and Assessment and Feedback. We propose these big ideas as an important starting point in the design of better AI systems.

Big Idea 1: Representation and Supports

The learning sciences have found that enabling students to make connections across multiple representations (for example, graphs, writing, images, maps, blocks, etc.) contributes to knowledge construction. Different forms of representation give students a way to make sense of concepts in whatever way best helps them construct their knowledge. How can this big idea be used in designing an AI system?

In a classroom where a teacher is fielding questions from students about a specific species of frog, an AI system can support the teacher by quickly searching for and projecting a set of visual representations of the frog that are appropriate for the students and have high-quality information for the teacher and students. When teaching about a metabolic function, an animation might help, and the AI system could share the animation and also point to text or other representations that may help students make connections to understand the process. By giving students and teachers just-in-time support like a relevant animation or engaging follow-up questions, AI systems can support teachers to orchestrate learning experiences by automating tasks (as described above) so teachers can spend more time focused on students. Beyond those types of just-in-time supports, AI systems can further support the engagement of all students in sustained creative work—something that has been a challenging problem in the design of learning environments.

Big Idea 2: Collaboration

The learning sciences have found that learning environments should be designed to foster collaboration and help learners work together to share and make sense of challenging problems. Research points us toward more social and collaborative learning environments.

AI systems could support this big idea by making recommendations for how teachers group students or by giving students themselves prompts that may lead to shared understanding when working in groups without the teacher. Emerging AI technologies might help teachers ask different groups the right guiding questions as the AI system “listens” to what each group is discussing. An AI system that asks questions might also be able to assess the answers to those questions, help students arrive at the same conceptual understanding, and determine when the group is ready for a new task.

Big Idea 3: How Students Think

The learning sciences have found that learning environments should not only be collaborative but should also foster adaptive or personalized learning, because there is not a single way to learn and individuals have unique needs when it comes to learning environment designs.

AI systems might support teachers in facilitating this big idea by finding instances of student reasoning for teachers to review based on the analysis of video, audio, or student work. AI systems can also quickly provide insights to teachers about what learning path a student is taking, and analytics could help teachers understand how each of their students tends to learn a concept based on their writing, speaking, or movements. A teacher might take the feedback given by an AI system, follow up with students about their individual learning process, and make decisions with them about what to do next. By helping students keep track of how they are practicing, providing scaffolds when they are needed, and removing them when a student is ready, an AI system can support students’ unique learning needs.

Big Idea 4: Building on Students’ Cultural and Linguistic Assets

The learning sciences have found that learning and teaching are cultural processes and that we best support learning when we include students’ cultural and linguistic backgrounds as pedagogical assets. This big idea means that AI systems need to support learning environments that enable teachers and learners to address the multiple elements of learning, including identity and culture. To do this, developers need to restructure the assumptions that are made about learners and what they know by keeping both teachers and learners in the loop. For example, AI systems can help personalize the materials for Spanish-speaking students and their parents by translating sections of text, or by providing just-in-time translations so that they can more fully participate in learning experiences. Another personalization could be an AI system where the agent speaks to students and engages them using speech patterns similar to those of the student.

Big Idea 5: Assessment and Feedback

There’s been a lot of discussion around how AI systems can support teachers and students with new types of assessment, such as stealth assessment or formative assessment. Formative assessment provides specific information to a teacher about students’ strengths and challenges to help the teacher adapt instruction to meet students’ needs. Students’ needs vary and can be challenging to anticipate—this is where AI systems can support teachers and learners. We won’t get into assessment and feedback more here, but check out this Educator CIRCLS webinar on Assessment or read this post on AI and Formative Assessment to learn more about this big idea.

Looking ahead

These big ideas from the learning sciences should be incorporated into AI systems to create better, user-centered products. In addition, educators need to be involved in the process because they have valuable insights about what is working and not working in ways that complement researchers’ expertise. Merlyn Mind Practitioner Advisory Board member, Grace Magley reminds us that teachers “have to see real benefits, not just new tech” and “they need to be shown how it would work in a classroom full of diverse learners. They need to see benefits for the teacher as well as the learners.”

This blog post is an invitation to you to connect with us and join the conversation on the future of AI in Education. If you are interested in getting involved, please visit:

Educator CIRCLS
AI CIRCLS
1Sawyer, R. (Ed.). (2014). The Cambridge Handbook of the Learning Sciences (2nd ed., Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139519526


I’m a Teacher, Will Artificial Intelligence Help Me?

Robot caricature in a yellow circle thinks of 0's and 1's, a teacher in a red heart thinks of people
by Judi Fusco and Pati Ruiz

Artificial Intelligence (AI) systems are becoming more prevalent everywhere including education. Educators often seem to wonder, “What is it?” and, “What can it do?” Let’s address these questions and then discuss why and how YOU should be involved!

What is it and what can it do for teachers?

Artificial intelligence (AI) is a field of computer science that lets machines make decisions and predictions. The goal of AI is to create machines that can mimic human capabilities. To do this, AI systems use many different techniques. You are probably using AI systems every day because they are embedded in our mobile phones and cars and include things like face recognition to unlock your phone, digital voice assistants, and mapping/route recommendations. We’re not going to go into the details of how AI works in this post, but you can read a prior post on AI and check out this glossary of AI terms that might be helpful if you want more background on the topic. In this post, we will focus on examples of AI systems that can help teachers.

Teachers have to do countless tasks, such as lesson planning, teaching, grading, mentoring, classroom management, keeping up with technology in the classroom and new pedagogical practices, monitoring progress, and administrative work, all while keeping students’ social and emotional needs in mind. While AI has come a long way since the 1950s, when the term was coined and work on Intelligent Tutoring Systems began, it cannot replace a teacher in the classroom. We will share examples of how existing AI systems have successfully helped teachers and reduced their load.

Example: Personalized online math learning software for middle and high school students

Mathia provides coaching to students as they solve math problems and gives teachers a detailed picture of where each student is, as well as suggestions for conversation starters to talk about each student’s understanding. This support allows teachers to spend more time with students focused on learning, while also directly giving the students additional, useful feedback as they solve math problems.

Example: A platform that provides immediate feedback to students and assessment data to teachers

Another AI system that supports both teachers and students is ASSISTments. It is also currently focused on math. For students, it gives assistance in the form of hints and instant feedback while they do math homework. For teachers, it gives information about which homework problems were difficult and what the most common wrong answers were. This can prompt teachers to spend time discussing the problems that students need the most help on, and teachers can be sure to re-teach concepts based on common wrong answers.

In addition to teaching content, when you think about all the things a teacher does in managing their classroom and all the “plates” they must juggle to keep 25, 30, or more students on task, engaged, and learning, you can imagine they could use some support. The next three systems described primarily support teachers.

Example: A digital assistant for teachers

One AI system that helps with classroom management tasks is a multimodal digital assistant specifically developed for teachers with privacy in mind, called Merlyn. Merlyn looks like a small speaker, but does so much more. It allows teachers to use voice and a remote control to control content from a distance. For example, with Merlyn teachers can set timers and switch displays between their laptop, document camera, and interactive whiteboard. Teachers can control a web browser on their laptop and do things like share a presentation, go to a specific point in a video, show a website, or search. This frees them up to walk around the classroom and interact with students more easily.

Other ways AI systems can support teaching and learning

The examples above show three categories of how AI systems have helped teachers and their students. Here are three more. TeachFX, for example, can analyze the conversation from a classroom session and identify how much the teacher talked versus the students. This tool also identifies whether teachers let students build on each other’s thoughts, leading to discussions. With the help of this AI system, teachers can work to engage their students in discussions and reflect on their practice.
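As a rough illustration of the kind of measure such a tool reports, teacher versus student talk share can be computed from a diarized transcript. The segment format below is invented for illustration; real systems must first solve the much harder problems of transcription and speaker attribution.

```python
# Hypothetical diarized transcript: (speaker role, seconds of talk).
segments = [
    ("teacher", 95.0),
    ("student", 20.5),
    ("teacher", 40.0),
    ("student", 64.5),
]

def talk_share(segments, role):
    """Fraction of total class talk time attributed to the given role."""
    total = sum(sec for _, sec in segments)
    spoken = sum(sec for r, sec in segments if r == role)
    return spoken / total if total else 0.0

teacher_share = talk_share(segments, "teacher")  # fraction of class talk
```

A teacher could track this number across sessions to see whether students are getting more of the floor over time.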

Grading is another task that is very important but very time-consuming. Gradescope, for example, supports instructors in grading their existing paper-based and digital assignments in less time than it normally takes them. It does this by scanning text and sorting similar responses together for the teacher to grade some of each type. The system then “learns” from the teacher, automatically grades the rest, and sends the grades to the teacher for review.
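The “grade one, propagate to the group” idea can be sketched as follows. This toy version groups answers that normalize to the same string; a real system like Gradescope uses far richer similarity measures, and the propagated grades still go back to the teacher for review.

```python
from collections import defaultdict

def normalize(answer: str) -> str:
    """Collapse case and whitespace so near-identical answers group together."""
    return " ".join(answer.lower().split())

answers = {
    "kim": "The mitochondria",
    "raj": "the   mitochondria",
    "lee": "The nucleus",
}

# Cluster students whose answers normalize to the same form.
groups = defaultdict(list)
for student, answer in answers.items():
    groups[normalize(answer)].append(student)

# The teacher grades one representative answer per group...
teacher_grades = {"the mitochondria": 1, "the nucleus": 0}

# ...and that grade propagates to everyone in the group, pending review.
gradebook = {s: teacher_grades[key]
             for key, members in groups.items() for s in members}
```

Even this crude normalization shows why grouping saves time: the teacher grades two answers instead of three, and the savings grow with class size.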

Finally, AI systems that are specialized within a subject matter can allow teachers to set up content-specific learning experiences. For example, in the domain of science, Inq-ITS allows teachers to select digital labs for their middle school students. When completing the assigned digital labs, students learn by doing. Inq-ITS autoscores the labs in real time and shows the teacher performance updates for each student. A teacher can use the reports to provide the appropriate support to students who need additional help. Inq-ITS also supports students with hints while they perform the labs.

Educators Must be Involved in the Design of AI Systems

The AI systems described above support or augment, but never replace, a teacher. We believe that AI systems can help by doing things that machines are good at while having teachers do the things that humans do best.

The AI systems above are also designed by teams that have made education and learning environments the main audience for their systems. They have also included teachers in their design process. Other AI tools exist, and even more are being developed to support teachers and students on other activities and tasks, but some don’t have the same focus on education. We think it’s important that educators, the end users, be involved in the design of AI systems for classrooms.

Some of the teams that design AI systems for education haven’t been in a classroom recently, and when they were, they probably weren’t the teacher. Making a technology that works in classrooms requires classroom experts (the main users) to be part of the design process, not an afterthought. When teachers give feedback, they help ensure 1) that systems work in ways that make sense for classrooms in general, and 2) that systems would work well in their specific classroom situations. (We’ll discuss why this is the case in a future blog post.)

A final, yet very important, reason for educators to be involved is that while AI systems can bring opportunities to support teaching and learning, there are also privacy, ethics, equity, and bias issues to be aware of. We don’t want to add anything to your already full plate, but as technologies come into your classroom, you should ask how the system supports students, whether it was designed for students like yours, what its privacy policies are, and what implications it might have for your students.

We understand that most teachers don’t have a single extra minute, but it is crucial to have current teachers in the design process. If you learn and think about AI systems as they become more prevalent, you will become an even more invaluable teacher or technology leader in your school or district. Your voice is important, and getting more educators involved makes a more powerful collective voice.

Looking ahead

If you’re still reading this blog, you probably have an interest in AI systems; below we suggest a few places to connect. Teachers are critical to the design of effective AI technologies for schools and classrooms. We hope this post has given you some insights into how AI systems might support you and your students. If you are interested in getting involved, we have some links for you below. Consider this blog post an invitation to you to connect with us and join the conversation; we hope you’ll join us in thinking about the future of AI in Education.

In our next post we will discuss how AI systems informed by learning science principles may help solve problems in learning environments.

Let us know your thoughts @educatorCIRCLS.

Ways to join:
Educator CIRCLS
AI CIRCLS
Join the ASSISTments Teacher Community
Leadership Programs — TeachFX


Book Review: Design Justice: Community-Led Practices to Build the Worlds We Need

by Pati Ruiz
Book Cover Design Justice

Critical pedagogy seeks to transform consciousness, to provide students with ways of knowing that enable them to know themselves better and live in the world more fully.
bell hooks, Teaching to Transgress

Written by Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need explores the relationships among design, power, and social justice. I was drawn to this book because it centers those who are intersectionally disadvantaged; this refers to individuals who may hold multiple minoritized identities, a concept that originally referred to the oppression of African American women. It also shares the work done by the Design Justice Network (DJN) to “build a better world, a world where many worlds fit.” The Design Justice Network is “an international community of people and organizations who are committed to rethinking design processes so that they center people who are too often marginalized by design.” This network is a community of practice that is guided by a set of 10 principles; I am a DJN signatory. I hope this short post prompts you to read the whole book, which is available for free on PubPub (the open-source, privacy-respecting, all-in-one collaborative publishing platform), or to sign up for the Design Justice Network newsletter to learn more. This book has really inspired me to think differently about design and what it takes to truly make design accessible.

What is Design Justice?

The book begins with a definition, or “tentative description” of design justice:

Design justice is a framework for analysis of how the design of technologies, tools, and learning environments (to name a few) distributes benefits and burdens between various groups of people. Design justice focuses explicitly on the ways that design reproduces and/or challenges the matrix of domination (white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality). Design justice is also a growing community of practice that aims to ensure a more equitable distribution of design’s benefits and burdens; meaningful participation in design decisions; and recognition of community-based, Indigenous, and diaspora design traditions, knowledge, and practices (p. 23).

After a comprehensive overview of the values, practices, narratives, and site/locations of design, the book turns to the pedagogies of design in Chapter 5. In this chapter, the author focuses on answering the question: How do we teach and learn about design justice?

Costanza-Chock responds to this question by writing, “I don’t believe there is only one way to answer this question, which is why I use ‘pedagogies’ in the plural form.” Among the pedagogies described in the chapter are:

  • Paulo Freire’s educación popular or popular education (pop ed)
  • critical community technology pedagogy
  • participatory action design
  • data feminism
  • constructionism, and
  • digital media literacy

Exploring Design Justice Pedagogies

In our previous work as CIRCL Educators, we wrote about constructionism. This pedagogy is one that teachers often turn to, and, as Costanza-Chock notes, it is not one that is “explicit about race, class, gender, or disability politics.” However, it should center the social and cultural aspects of learning, the construction of knowledge in the learner, and the learner’s contexts (e.g., a student’s racial/ethnic background, social class, and other social identities). Furthermore, Costanza-Chock writes that “in a constructionist pedagogy of design justice, learners should make knowledge about design justice for themselves and do so through working on meaningful projects. Ideally, these should be developed together with, rather than for, communities that are too often excluded from design processes.”

Hand in hand with the pedagogies described in this chapter is the decolonization of design practices, which refers to deconstructing the Western privileging of certain thoughts and approaches. Those involved in the decolonizing design movement advocate for a global approach to design that rethinks historical narratives and seeks to center design practices erased or ignored by Eurocentric traditions. As Costanza-Chock describes, “design justice pedagogies must support students to actively develop their own critical analysis of design, power, and liberation, in ways that connect with their own lived experience.” As teachers and educators, our role is to figure out a way to overcome existing design challenges so that our students can implement just design principles.

Principles of Design Justice

What are practical examples of what teaching about design justice looks like? Based on the author’s experiences in her own courses, 10 principles illustrate what this movement envisions:

Principle 1: We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems.
Principle 2: We center the voices of those who are directly impacted by the outcomes of the design process.
Principle 3: We prioritize design’s impact on the community over the intentions of the designer.
Principle 4: We view change as emergent from an accountable, accessible, and collaborative process, rather than as a point at the end of a process.
Principle 5: We see the role of the designer as a facilitator rather than an expert.
Principle 6: We believe that everyone is an expert based on their own lived experience, and that we all have unique and brilliant contributions to bring to a design process.
Principle 7: We share design knowledge and tools with our communities.
Principle 8: We work towards sustainable, community-led and -controlled outcomes.
Principle 9: We work towards non-exploitative solutions that reconnect us to the earth and to each other.
Principle 10: Before seeking new design solutions, we look for what is already working at the community level. We honor and uplift traditional, indigenous, and local knowledge and practices.

At Educator CIRCLS we are at the beginning of our conversation around AI in Education. These design justice principles will be front of mind as we continue to consider and discuss the variety of ways AI technologies are currently being developed and employed. Please let us know your thoughts by tweeting @EducatorCIRCLS and sign up for the CIRCLS newsletter to stay updated on emerging technologies for teaching and learning.


Reflections on Coded Bias

Coded Bias film ad Watch with us

“Algorithmic justice––making sure there’s oversight in the age of automation––is one of the largest civil rights concerns we have.” –Joy Buolamwini

On May 3rd, 2021 Educator CIRCLS hosted a watch party for the film Coded Bias, which highlights the incredible work being done by organizations, data scientists, and activists on an international scale. The film challenged our unconscious biases and encouraged us to listen to one another as we consider the ways that we interact with artificial intelligence (AI) on a daily basis. To begin with, the film made very clear the wide societal impacts, both positive and negative, of AI, as well as the fact that AI algorithms can perpetuate biases. Given this, we believe it is essential to become more knowledgeable about AI so that we, as educators, can make informed decisions about its use. As we watched this film, we considered and discussed the ethical implications that need to be fully investigated before new AI tools are adopted in our classrooms. This film also helped us see that we need to investigate the people designing the AI, and it helped us arrive at some important questions that we need to be asking about AI.

Here are some questions:

  • How was the AI system designed, for classroom use or other situations? At what point are teachers brought in to make decisions about their students?
  • What data was used when the system was trained?
    • What groups of people were included during the testing process?
  • What data will be collected by the system and what will happen to that data if the tool is sold? Will it be used only for the purpose specified? Are there any potential dangers to the students? Are there any potential dangers to the teachers who use the systems with their students?
    • Can students be identified from this data?
    • Can teachers be identified from this data?
    • Can this data be used to evaluate teachers’ performance (something that may not be specified by the system)?
  • How does the system interact with students, and can I give feedback to the system or override the decisions?

Another very important but difficult question to answer is: when this AI tool fails, how does it fail, and what are the consequences? While EdTech designers might not be able to answer this question accurately, you might be able to use it to start a conversation about the pitfalls of a particular piece of technology. It will also challenge EdTech designers to think about these difficult questions and to revisit the design process to adjust their product if needed. After all, starting these conversations about the ethics of AI and where its faults lie is our duty.

Sign up for the CIRCLS newsletter to stay updated on emerging technologies for teaching and learning and let us know what you think by tweeting @EducatorCIRCLS

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]