
Glossary of Artificial Intelligence Terms for Educators

by Pati Ruiz and Judi Fusco

This glossary was written for educators to reference when learning about and using artificial intelligence (AI). We will start with a definition of artificial intelligence and then provide definitions of AI-related terms in alphabetical order. This glossary was last updated on March 31, 2024.

Artificial Intelligence (AI): AI is a branch of computer science. AI systems use hardware, algorithms, and data to create “intelligence” to do things like make decisions, discover patterns, and perform some sort of action. AI is a general term, and there are more specific terms used in the field of AI. AI systems can be built in different ways; two of the primary ways are: (1) through the use of rules provided by a human (rule-based systems); or (2) with machine learning algorithms. Many newer AI systems use machine learning (see definition of machine learning below).

________________________________________________
Algorithm: Algorithms are the “brains” of an AI system and determine its decisions; in other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine Learning for more) or be rule-based, where human programmers give the rules.
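
To make the distinction concrete, here is a minimal sketch of the rule-based approach, with a hypothetical rule and threshold we made up for illustration; a machine learning algorithm would instead infer a rule like this from example data (see Machine Learning below).

```python
# Rule-based AI: a human programmer writes the rule explicitly.
# (Hypothetical rule and threshold, for illustration only.)

def needs_review(quiz_score: float) -> bool:
    """Flag any quiz score below 60% for teacher review."""
    return quiz_score < 0.60  # a human chose this rule and this threshold

print(needs_review(0.55))  # True
print(needs_review(0.80))  # False
```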

Artificial General Intelligence (AGI): Artificial general intelligence has not yet been realized; it would be an AI system that can learn, understand, and solve any problem that a human can.

Artificial Narrow Intelligence (ANI): AI that solves a narrow problem is called artificial narrow intelligence. For example, a smartphone can use facial recognition to identify photos of an individual in the Photos app, but that same system cannot identify sounds.

Generative AI (GenAI): A type of machine learning that generates content, currently including text, images, music, and video, and that can create 3D models from 2D input. ChatGPT (see definition below) is a specific example of GenAI.

Chat-based generative pre-trained transformer (ChatGPT) models: A system built with a neural network transformer type of AI model that works well in natural language processing tasks (see definitions for Neural Networks and Natural Language Processing below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); and (3) can process sentences differently than other types of models (Transformer).

Transformer models: Used in GenAI (the T stands for Transformer), transformer models are a type of language model. They are neural networks and are also classified as deep learning models. They give AI systems the ability to determine and focus on important parts of the input and output, using something called a self-attention mechanism.

Self-attention mechanism: These mechanisms, also referred to as attention, help systems determine the important aspects of input in different ways. There are several types, and they were inspired by how humans can direct their attention to important features in the world, understand ambiguity, and encode information.
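
For readers who want to see the math, the widely used scaled dot-product form of self-attention can be sketched in a few lines of Python with NumPy. The toy sizes here (a 4-token “sentence” with 8 features per token) are arbitrary assumptions for illustration; real transformers use learned weight matrices and far larger dimensions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (tokens x features)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to each other token
    weights = softmax(scores, axis=-1)       # attention weights sum to 1 for each token
    return weights @ V                       # each output is a weighted mix of the values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # a toy "sentence": 4 tokens, 8 features each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```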

Large language models (LLMs): Large language models form the foundation for generative AI (GenAI) systems. GenAI systems include some chatbots and tools, including OpenAI’s GPTs, Meta’s LLaMA, xAI’s Grok, and Google’s PaLM and Gemini. LLMs are artificial neural networks. At a very basic level, an LLM detects statistical relationships in its training data that capture how likely a word is to appear after the words that precede it. As they answer questions or write text, LLMs use this model of word likelihood to predict the next word to generate. LLMs are a type of foundation model, which are pre-trained with deep learning techniques on massive data sets of text documents. Sometimes, companies include data sets of text without the creators’ consent.
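
As a toy illustration of “predicting the next word from likelihoods,” the Python sketch below counts which word follows which in a tiny made-up corpus and predicts the most frequent follower. Real LLMs use neural networks over long contexts rather than simple counts, but the underlying idea of next-word likelihood is the same.

```python
from collections import Counter, defaultdict

# Count how often each word follows the previous word in a tiny "training corpus."
corpus = "the cat sat on the mat and the cat slept".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow, based on training counts."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- it followed 'the' most often during training
```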

Computer Vision: Computer Vision is a set of computational challenges concerned with teaching computers how to understand visual information, including objects, pictures, scenes, and movement (including video). Computer Vision (often thought of as an AI problem) uses techniques like machine learning to achieve this goal.

Critical AI: Critical AI is an approach to examining AI from a perspective that focuses on reflective assessment and critique as a way of understanding and challenging existing and historical structures within AI. Read more about critical AI.

Data: Data are units of information about people or objects that can be used by AI technologies.

Training Data: This is the data used to train the algorithm or machine learning model. It was generated by humans in their work or other contexts in the past. While it sounds simple, training data is so important because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)1
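
Here is a tiny, purely illustrative example (the numbers are invented) of how historical skew becomes a model’s behavior: a system that learns hiring rates from past data will simply reproduce them.

```python
# Toy historical hiring records: (group, hired?). Group A was hired far more often.
history = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 20 + [("B", 0)] * 80

def hire_rate(group):
    """The 'rule' a naive model learns: the historical hire rate per group."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A model trained on this data recommends group A candidates far more often,
# replicating the bias baked into its training data.
print(hire_rate("A"))  # 0.9
print(hire_rate("B"))  # 0.2
```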

Foundation Models: Foundation models are trained on large amounts of data and can be used as a foundation for developing other models. For example, generative AI systems use large language foundation models. They can be a way to speed up the development of new systems, but there is controversy about using foundation models since, depending on where their data come from, there are different issues of trustworthiness and bias. Jitendra Malik, Professor of Computer Science at UC Berkeley, once said the following about foundation models: “These models are really castles in the air; they have no foundation whatsoever.”

Human-centered Perspective: A human-centered perspective sees AI systems working with humans and helping to augment human skills. People should always play a leading role in education, and AI systems should not replace teachers.

Intelligence Augmentation (IA): Augmenting means making something greater; in some cases, perhaps it means making it possible to do the same task with less effort. Maybe it means letting a human (perhaps a teacher) choose not to do all the redundant tasks in a classroom but automate some of them so they can do more things that only a human can do. It may mean other things. There’s a fine line between augmenting and replacing, and technologies should be designed so that humans can choose what a system does and when it does it.

Intelligent Tutoring Systems (ITS): A computer system or digital learning environment that gives instant and custom feedback to students. An Intelligent Tutoring System may use rule-based AI (rules provided by a human) or machine learning under the hood. By “under the hood” we mean the underlying algorithms and code that an ITS is built with. ITSs can support adaptive learning.

Adaptive Learning: Subject or course material is adjusted based on the performance of the learner. The difficulty of material, the pacing, sequence, type of help given, or other features can be adapted based on the learner’s prior responses.
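
A minimal sketch of the idea, with hypothetical thresholds; real adaptive systems use much richer learner models than an average over recent answers.

```python
def next_difficulty(current, last_answers):
    """Adjust a difficulty level (1-10) based on the learner's last few responses."""
    correct = sum(last_answers) / len(last_answers)
    if correct >= 0.8:               # mostly right: offer harder material
        return min(current + 1, 10)
    if correct <= 0.4:               # mostly wrong: ease off (and perhaps add hints)
        return max(current - 1, 1)
    return current                   # otherwise, stay at this level

print(next_difficulty(5, [1, 1, 1, 1, 0]))  # 6
print(next_difficulty(5, [0, 0, 1, 0, 0]))  # 4
```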

Interpretable Machine Learning (IML): Interpretable machine learning, sometimes also called interpretable AI, describes the creation of models that are inherently interpretable in that they provide their own explanations for their decisions. This approach is preferable to that of explainable machine learning (see definition below) for many reasons, including the fact that we should understand what is happening in our systems from the beginning, rather than try to “explain” black boxes after the fact.

Black Boxes: We call things we don’t understand “black boxes” because what happens inside the box cannot be seen. Many machine learning algorithms are “black boxes,” meaning that we don’t have an understanding of how a system is using features of the data when making its decisions (generally, we do know what features are used but not how they are used). There are currently two primary ways to pull back the curtain on the black boxes of AI algorithms: interpretable machine learning (see definition above) and explainable machine learning (see definition below).

Machine Learning (ML): Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.

It’s important to note that in machine learning, the algorithm is doing the work to improve and does not have the help of a human programmer. It is also important to note three more things. One, in most cases the algorithm is learning an association (when X occurs, it usually means Y) from training data that is from the past. Two, since the data is historical, it may contain biases and assumptions that we do not want to perpetuate. Three, there are many questions about involving humans in the loop with AI systems; when using ML to solve AI problems, a human may not be able to understand the rules the algorithm is creating and using to make decisions. This could be especially problematic if a human learner was harmed by a decision a machine made and there was no way to appeal the decision.
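
As a concrete example of an algorithm finding its own rules, the scikit-learn sketch below fits a small decision tree to made-up data and prints the rule it discovered; no human wrote the threshold it uses.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data (invented for illustration): [hours_studied, quizzes_completed] -> passed (1) or not (0).
X = [[1, 0], [2, 1], [3, 1], [5, 3], [6, 4], [8, 5]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The tree discovered its own rule (a threshold on the features) from the data.
print(export_text(model, feature_names=["hours_studied", "quizzes_completed"]))
print(model.predict([[4, 2]]))  # the model applies its learned rule to a new case
```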

Illustration of the topology of a generic artificial neural network: a column of three input nodes connects to a column of four hidden nodes, which connects to a column of two output nodes. (Image licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.)

Neural Networks (NN): Neural networks, also called artificial neural networks (ANNs), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in a human brain. In a neural network, after data enter the first layer, the data go through a hidden layer of nodes where calculations that adjust the strength of connections in the nodes are performed, and then go to an output layer.

Deep Learning: Deep learning models are a subset of neural networks. With multiple hidden layers, deep learning algorithms are potentially able to recognize more subtle and complex patterns. Like neural networks, deep learning algorithms involve interconnected nodes where weights are adjusted, but as mentioned earlier there are more layers and more calculations that can make adjustments to the output to determine each decision. The decisions by deep learning models are often very difficult to interpret as there are so many hidden layers doing different calculations that are not easily translatable into English rules (or another human-readable language).
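
The Python/NumPy sketch below runs one forward pass through the 3-4-2 network shown in the figure above, using random (untrained) weights; training would repeatedly adjust those weights, and a deep network would simply stack more hidden layers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Weights for the 3-4-2 network in the figure: 3 inputs -> 4 hidden nodes -> 2 outputs.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # hidden-layer calculations (ReLU activation)
    return hidden @ W2 + b2              # output layer

print(forward(np.array([0.5, -1.0, 2.0])))  # two output values

# A deep network stacks more hidden layers (W1 -> W2 -> W3 -> ...); training
# adjusts all of these weights to reduce error on the training data.
```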

Natural Language Processing (NLP): Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).

NLP technologies help in many situations, including: scanning texts to turn them into editable text (optical character recognition), speech-to-text, voice-based computer help systems, grammatical correction (like autocorrect or Grammarly), summarizing texts, and others.

Robots: Robots are embodied mechanical machines that are capable of doing a physical task for humans. “Bots” are typically software agents that perform tasks in a software application (e.g., in an intelligent tutoring system they may offer help). Bots are sometimes called conversational agents. Both robots and bots can contain AI, including machine learning, but do not have to have it. AI can help robots and bots perform tasks in more adaptive and complex ways.

User Experience Design/User Interface Design (UX/UI): User-experience/user-interface design refers to the overall experience users have with a product. These approaches are not limited to AI work. Product designers implement UX/UI approaches to design and understand the experiences their users have with their technologies.

Explainable Machine Learning (XML) or Explainable AI (XAI): Researchers have developed a set of processes and methods that allow humans to better understand the results and outputs of machine learning algorithms. This helps developers of AI-mediated tools understand how the systems they design work and can help them ensure that they work correctly and are meeting requirements and regulatory standards.

It is important to note that the term “explainable” in the context of explainable machine learning or explainable AI refers to an after-the-fact explanation of how a model arrived at its output, not a true understanding of how the model works. In theory, explainable ML/AI means that an ML/AI model will be “explained” after the algorithm makes its decision so that we can understand how the model works. This often entails using another algorithm to help explain what is happening inside the “black box.” One issue with XML and XAI is that we cannot know for certain whether the explanation we are getting is correct, and therefore we cannot technically trust either the explanation or the original model. Instead, researchers recommend the use of interpretable models.

Thank you to Michael Chang, Ph.D., a CIRCLS postdoctoral scholar, for reviewing this post and to Eric Nentrup for support with specific definitions. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

________________________________________________
1 Fusco, J. (2020). Book Review: You Look Like a Thing and I Love You. CIRCLEducators Blog. Retrieved from https://circleducators.org/review-you-look-like-a-thing/

Teachers Partnering with Artificial Intelligence: Augmentation and Automation


By Pati Ruiz and Judi Fusco

Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue. Our starting point is that AI systems should support or augment, but never replace, a teacher. In order to ensure this, these systems should be developed with the input of teachers, students, and families.

So, what types of AI systems do teachers want to see developed? A group of teachers from the Merlyn Mind Practitioner Advisory Board shared ideas for how AI might help teachers better support their students. One scenario emerged around students who have Individualized Education Programs or Plans (IEPs)1. In this post we will describe how an AI system might support teachers and students by automating:

  1. Planning and Supporting Preferences
  2. Monitoring
  3. Documentation

Planning and Supporting Preferences

First, a teacher could input student instructional plans into the system. Then, the system can review the plans, make recommendations, and send alerts to the teacher when something may not work for a student. In the alert, the system could provide suggestions of adaptations on lessons or assignments based on the needs of each student. For example, an AI system can scan what’s coming up in an instructional unit and alert the teacher that the website they selected does not meet the accessibility standards required by the students in the classroom. A more advanced system could also suggest an alternative option, or even better, search for multiple resources that are appropriate and let the teacher decide what resources are best suited for their students’ instructional needs. In all cases, the AI system is only helping and making suggestions that the teacher may act on.

An AI system can also allow for a broader range of inputs from students for assignments based on their needs. For example, if a student accommodation includes submitting assignments as recorded audio, but the teacher prefers written assignments, an AI system can convert the student’s audio to text so the teacher can review or grade the text. The speech-to-text tool should also allow the teacher to hear the student’s voice for a particular sentence or phrase, for example, if the transcription was not successful. Alternatively, if a student needs to hear the teacher’s comments on their assignments instead of reading them, the AI system can convert the written comments into speech for the student to hear. To additionally help the teacher, the system might suggest comments that they had written for another student so the teacher can reuse or repurpose them. The system might also remind the teacher of a student’s preference for feedback, and if the student prefers verbal feedback, the teacher could read and record the comments for that more personal touch.
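
As a rough sketch of the two conversions described here, the snippet below uses two commonly available Python libraries (pyttsx3 for text-to-speech and SpeechRecognition for speech-to-text). The library choices, function names we define, and file paths are our own illustrative assumptions, not a description of any particular product.

```python
import pyttsx3                   # offline text-to-speech
import speech_recognition as sr  # wraps several speech-to-text services

def comments_to_audio(comments: str) -> None:
    """Read the teacher's written comments aloud for the student."""
    engine = pyttsx3.init()
    engine.say(comments)
    engine.runAndWait()

def audio_assignment_to_text(wav_path: str) -> str:
    """Transcribe a student's recorded assignment so the teacher can read it."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)  # one of several available backends
```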

Monitoring

To support teachers in providing adequate accommodations for their students, an AI system can monitor student IEP information and make automated recommendations for needed support. For example, the system could identify students who require extended time and either share a list with the teacher or make appropriate adjustments to due dates for individual students in a learning management system. Here, we point out the need for AI systems to be able to interact with other systems or be embedded within them. Additionally, the system must do this in a way that does not expose sensitive information about students to the whole class.
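
A minimal sketch of the extended-time idea, using a hypothetical accommodations record; a real system would read this information securely from IEP records and push the adjustments to the LMS without exposing them to other students.

```python
from datetime import date, timedelta

# Hypothetical, private records of which students have an extended-time accommodation.
accommodations = {"student_17": {"extended_time_days": 2}, "student_23": {}}

def adjusted_due_date(student_id: str, class_due: date) -> date:
    """Extend the due date for students whose IEP record calls for it."""
    extra = accommodations.get(student_id, {}).get("extended_time_days", 0)
    return class_due + timedelta(days=extra)

print(adjusted_due_date("student_17", date(2024, 5, 1)))  # 2024-05-03
print(adjusted_due_date("student_23", date(2024, 5, 1)))  # 2024-05-01
```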

Related to the text-to-speech and speech-to-text ideas discussed above, an AI system can also provide individualized read-aloud capabilities for students who need that support. The system could also remind the teacher to provide tools, like headphones or closed captioning for students who need to listen to content. We firmly believe that AI systems can help by doing things that machines are good at, while continuing to enable teachers to focus on what humans do best—like developing interpersonal relationships and identifying nuanced needs. With these types of automated supports, it is important to ensure that teachers have the ability to make the final decisions about students’ needs and that students have the agency to accept and decline supports as they go.

Documentation

Supporting a classroom with students who have varying needs—whether they are documented in an IEP or not—requires a significant amount of monitoring and reporting on the part of educators. An AI system could support teachers by not only monitoring the individual requirements of students, but also documenting the adjustments and accommodations that were made for each student. This documentation could then be shared with the students’ families to provide a summary of the work that students have accomplished and how they have been supported in completing that work. Of course, a teacher would review and verify that the summary produced by the AI system is accurate and flag any issues with the write-ups that would need to be addressed by the AI design team.

By the end of the instructional unit, teachers would be able to review reports of student progress, identify what worked and what didn’t, and ensure that all students are making meaningful progress. Automating planning, tracking, and documentation can give a teacher more time to care for students; however, given the various risks AI systems bring, it is crucial that teachers also have the capability to override an AI system when needed.

Risks

The imagined AI system described above helps teachers do what they do best by supporting them in ensuring their students receive the accommodations they require, and it documents those accommodations. Using such systems will come with risks, and AI systems that engage with student IEP data need to have the highest level of data privacy and oversight. As we discussed earlier, educators must be involved—for example, the teacher is in charge of giving feedback, but the system may make suggestions that help the teacher give better feedback. If educator experts are not in the loop, there could be harmful consequences for students. Educators must be diligent and not assume that every accommodation determined by an AI system is correct or the best decision. AI systems lack full context and the ability to make human decisions. Educators must have oversight and be able to verify and approve every decision made by the system.

Educator Voices

This blog post presents an imagined AI system based on conversations with a group of practitioners from the Merlyn Mind Practitioner Advisory Board. We need more teachers and educators involved in these conversations, so please consider this blog post as an invitation to you to connect with us and join the conversation on the future of AI in Education. In addition to Merlyn Mind, if you are interested in getting involved, please visit the links below.

1 An IEP is a legal document in the United States that is developed for all public school children who need special education. It is created by district personnel with input from the child’s guardians and is reviewed every year. For more information see https://www2.ed.gov/about/offices/list/ocr/docs/edlite-FAPE504.html

How Can AI Systems Support Teachers: 5 Big Ideas from the Learning Sciences

This post was originally published on the Digital Promise website.

By Pati Ruiz and Judi Fusco

The learning sciences study the design and implementation of effective learning environments by drawing on a variety of perspectives across a range of physical, social, and technological spaces1. The learning sciences focus on human learning and on helping individuals achieve their fullest potential and attain 21st-century skills. Because of this focus, the learning sciences should be foundational in the design and development of emerging technologies for teaching and learning. AI systems are an emerging technology that is starting to play a significant role in the redesign of learning environments. To increase our chances of creating successful AI systems for learning, they should be grounded in the learning sciences. We’ll discuss five big ideas from the learning sciences in relation to the design of AI systems: Representation and Supports; Collaboration; How Students Think; Building on Students’ Cultural and Linguistic Assets; and Assessment and Feedback. We propose that these big ideas are important as a starting point in the design of better AI systems.

Big Idea 1: Representation and Supports

The learning sciences have found that enabling students to make connections across multiple representations (for example, graphs, writing, images, maps, blocks, etc.) contributes to knowledge construction. Different forms of representation give students a way to make sense of concepts in whatever way best helps them construct their knowledge. How can this big idea be used in designing an AI system?

In a classroom where a teacher is fielding questions from students about a specific species of frog, an AI system can support the teacher by quickly searching for and projecting a set of visual representations of the frog that are appropriate for the students and have high-quality information for the teacher and students. When teaching about a metabolic function, an animation might help, and the AI system could share the animation and also point to text or other representations that may help students make connections to understand the process. By giving students and teachers just-in-time support like a relevant animation or engaging follow-up questions, AI systems can support teachers to orchestrate learning experiences by automating tasks (as described above) so teachers can spend more time focused on students. Beyond those types of just-in-time supports, AI systems can further support the engagement of all students in sustained creative work—something that has been a challenging problem in the design of learning environments.

Big Idea 2: Collaboration

The learning sciences have found that learning environments should be designed to foster collaboration and help learners work together to share and make sense of challenging problems. Research points us toward more social and collaborative learning environments.

AI systems could support this big idea by making recommendations for how teachers group students or by giving students themselves prompts that may lead to shared understanding when working in groups without the teacher. Emerging AI technologies might help teachers ask different groups the right guiding questions as the AI system “listens” to what each group is discussing. An AI system that asks questions might also be able to assess the answers to those questions, help students arrive at the same conceptual understanding, and determine when the group is ready for a new task.

Big Idea 3: How Students Think

The learning sciences have found that learning environments should be not only collaborative, but also foster adaptive or personalized learning because there is not a single way to learn and individuals have unique needs when it comes to learning environment designs.

AI systems might support teachers in facilitating this big idea by finding instances of student reasoning for teachers to review based on the analysis of video, audio, or student work. AI systems can also quickly provide insights to teachers about what learning path a student is taking, and analytics could help teachers understand how each of their students tends to learn a concept based on their writing, speaking, or movements. A teacher might take the feedback given by an AI system, follow up with students about their individual learning process, and make decisions with them about what to do next. By helping students keep track of how they are practicing, providing scaffolds when they are needed, and removing them when a student is ready, an AI system can support students’ unique learning needs.

Big Idea 4: Building on Students’ Cultural and Linguistic Assets

The learning sciences have found that learning and teaching are cultural processes and that we best support learning when we include students’ cultural and linguistic backgrounds as pedagogical assets. This big idea means that AI systems need to support learning environments that enable teachers and learners to address the multiple elements of learning, including identity and culture. To do this, developers need to restructure the assumptions that are made about learners and what they know by keeping both teachers and learners in the loop. For example, AI systems can help personalize the materials for Spanish-speaking students and their parents by translating sections of text, or by providing just-in-time translations so that they can more fully participate in learning experiences. Another personalization could be an AI system where the agent speaks to students and engages them using speech patterns similar to those of the student.

Big Idea 5: Assessment and Feedback

There’s been a lot of discussion around how AI systems can support teachers and students with new types of assessment, such as more stealth assessment or formative assessment. Formative assessment provides specific information to a teacher about students’ strengths and challenges to help the teacher adapt instruction to meet students’ needs. Students’ needs vary, and they can also be challenging to anticipate—this is where AI systems can support teachers and learners. We won’t get into assessment and feedback more here, but check out this Educator CIRCLS webinar on Assessment or read this post on AI and Formative Assessment to learn more about this big idea.

Looking ahead

These big ideas from the learning sciences should be incorporated into AI systems to create better, user-centered products. In addition, educators need to be involved in the process because they have valuable insights about what is working and not working in ways that complement researchers’ expertise. Merlyn Mind Practitioner Advisory Board member Grace Magley reminds us that teachers “have to see real benefits, not just new tech” and “they need to be shown how it would work in a classroom full of diverse learners. They need to see benefits for the teacher as well as the learners.”

1Sawyer, R. (Ed.). (2014). The Cambridge Handbook of the Learning Sciences (2nd ed., Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139519526

I’m a Teacher, Will Artificial Intelligence Help Me?


by Judi Fusco and Pati Ruiz

Artificial Intelligence (AI) systems are becoming more prevalent everywhere including education. Educators often seem to wonder, “What is it?” and, “What can it do?” Let’s address these questions and then discuss why and how YOU should be involved!

What is it and what can it do for teachers?

Artificial intelligence (AI) is a field of computer science that lets machines make decisions and predictions. The goal of AI is to create machines that can mimic human capabilities. To do this, AI systems use many different techniques. You are probably using AI systems every day because they are embedded in our mobile phones and cars and include things like face recognition to unlock your phone, digital voice assistants, and mapping/route recommendations. We’re not going to go into the details of how AI works in this post, but you can read a prior post on AI and check out this glossary of AI terms that might be helpful if you want more background on the topic. In this post, we will focus on examples of AI systems that can help teachers.

Teachers have to do countless tasks, such as lesson planning, teaching, grading, mentoring, classroom management, keeping up with technology in the classroom and new pedagogical practices, monitoring progress, and administrative work, all while keeping students’ social and emotional needs in mind. While AI has come a long way since the 1950s, when the term was coined and early work on Intelligent Tutoring Systems began, it cannot replace a teacher in the classroom. We will share examples of how existing AI systems have successfully helped teachers and reduced their load.

Example: Personalized online math learning software for middle and high school students

MATHia provides coaching to students as they solve math problems and gives teachers a detailed picture of where each student is, as well as suggestions for conversation starters to talk about each student’s understanding. This support allows teachers to spend more time with students focused on learning, while also directly giving the students additional, useful feedback as they solve math problems.

Example: A platform that provides immediate feedback to students and assessment data to teachers

Another AI system that supports both teachers and students is ASSISTments. It is also currently focused on math. For students, it gives assistance in the form of hints and instant feedback while they do math homework. For teachers, it gives information about which homework problems were difficult and what the most common wrong answers were. This can prompt teachers to spend time discussing the problems that students need the most help on, and teachers can be sure to re-teach concepts based on common wrong answers.

In addition to teaching content, when you think about all the things a teacher does in managing their classroom and all the “plates” they must juggle to keep 25, 30, or more students on task, engaged, and learning, you can imagine they could use some support. The systems described next primarily support teachers.

Example: A digital assistant for teachers

One AI system that helps with classroom management tasks is a multimodal digital assistant specifically developed for teachers with privacy in mind, called Merlyn. Merlyn looks like a small speaker, but does so much more. It allows teachers to use voice and a remote control to control content from a distance. For example, with Merlyn teachers can set timers and switch displays between their laptop, document camera, and interactive whiteboard. Teachers can control a web browser on their laptop and do things like share a presentation, go to a specific point in a video, show a website, or search. This frees them up to walk around the classroom and interact with students more easily.

Other ways AI systems can support teaching and learning

The examples above show three categories of how AI systems have helped teachers and their students. Three more examples follow. The first is an AI system that can analyze the conversation from a classroom session and identify how much a teacher talked versus the students (i.e., TeachFX). This tool also identifies whether teachers let students build on each other’s thoughts, leading to discussions. With the help of this AI system, teachers can work to engage their students in discussions and reflect on their practice.
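
The kind of talk-time summary such a tool produces can be sketched in a few lines of Python, assuming the hard parts (transcription and speaker labeling) have already happened; this is an illustration of the general idea, not TeachFX’s actual method, and the transcript is invented.

```python
# Toy transcript: (speaker, utterance) pairs. A real system would first need
# speech-to-text and speaker identification; both are assumed here.
transcript = [
    ("teacher", "Who can explain photosynthesis?"),
    ("student", "Plants use light to make food."),
    ("teacher", "Good. What goes in and what comes out?"),
    ("student", "Carbon dioxide and water go in, oxygen and sugar come out."),
    ("student", "And the energy comes from sunlight."),
]

words = {"teacher": 0, "student": 0}
for speaker, utterance in transcript:
    words[speaker] += len(utterance.split())  # count words spoken by each role

total = sum(words.values())
for speaker, count in words.items():
    print(f"{speaker}: {count / total:.0%} of words spoken")
```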

Grading is another task that is very important but very time consuming. Gradescope, for example, supports instructors in grading their existing paper-based and digital assignments in less time than it normally takes them. It does this by scanning text and sorting similar responses together for the teacher to grade some of each type; the system then “learns” from the teacher, automatically grades the rest, and sends the grading to the teacher for review.
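
One plausible way to group similar short answers is sketched below with scikit-learn on made-up responses; this illustrates the general clustering idea, not Gradescope’s actual method.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

answers = [
    "photosynthesis converts light to chemical energy",
    "plants turn light into chemical energy",
    "mitochondria make ATP",
    "the mitochondrion produces ATP for the cell",
]

vectors = TfidfVectorizer().fit_transform(answers)  # represent each answer numerically
groups = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
print(groups)  # similar answers land in the same group for batch review
```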

Finally, AI systems that are specialized within a subject matter can allow teachers to set up content-specific learning experiences. For example, in the domain of science, Inq-ITS allows teachers to select digital labs for their middle school students. When completing the assigned digital labs, students learn by doing. Inq-ITS autoscores the labs in real time and shows the teacher performance updates for each student. A teacher can use the reports to provide the appropriate support to students who need additional help. Inq-ITS also supports students with hints while performing the labs.

Educators Must be Involved in the Design of AI Systems

The AI systems described above support or augment, but never replace, a teacher. We believe that AI systems can help by doing things that machines are good at while having teachers do the things that humans do best.

The AI systems above are also designed by teams that have made education and learning environments the main audience for their systems. They have also included teachers in their design process. There are other AI tools that exist, and even more that are being developed, to support teachers and students on other activities and tasks, but some don’t have the same focus on education. We think it’s important that educators – the end users – be involved in the design of AI systems for classrooms.

Some of the teams that design AI systems for education haven’t been in a classroom recently, and when they were, they probably weren’t the teacher. To make a technology that works in classrooms requires classroom experts (the main users) to be part of the design process and not an afterthought. When teachers give feedback, they help ensure 1) that systems work in ways that make sense for classrooms in general, and 2) that systems would work well in their specific classroom situations. (We’ll discuss why this is the case in a future blog post.)

A final, yet very important reason for educators to be involved, is that while AI systems can bring opportunities to support teaching and learning, there are also privacy, ethics, equity, and bias issues to be aware of. We don’t want to add anything to your already full plate, but as technologies come into your classroom, you should ask questions about how the system supports students, if the systems were designed for students like your students, what the privacy policies are, and any implications that might affect your students.

We understand that most teachers don’t have a single extra minute, but it is crucial to have current teachers in the design process. If you want to learn and think about AI systems as they become more prevalent, you will become an even more valuable teacher or technology leader in your school/district. Your voice is important, and getting more educators involved makes a more powerful collective voice.

Looking ahead

If you’re still reading this blog, you probably have an interest in AI systems; below we suggest a few places to connect. Teachers are critical to the design of effective AI technologies for schools and classrooms. We hope this post has given you some insights into how AI systems might support you and your students. If you are interested in getting involved, we have some links for you below. Consider this blog post an invitation to you to connect with us and join the conversation; we hope you’ll join us in thinking about the future of AI in Education.

In our next post we will discuss how AI systems informed by learning science principles may help solve problems in learning environments.

Let us know your thoughts @educatorCIRCLS.

Ways to join:
Educator CIRCLS
AI CIRCLS
Join the ASSISTments Teacher Community
Leadership Programs — TeachFX

Book Review: You Look Like a Thing and I Love You

This post was originally published on CIRCLEducators.org in October, 2020.

by Judi Fusco


During CIRCL Educators’ Summer of Artificial Intelligence (AI), I read the book You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place1, by Dr. Janelle Shane. I got the recommendation for it from fellow CIRCL Educator, Angie Kalthoff.

I found the book helpful even though it is not about AI in education. I read and enjoyed the e-book and the audio version. As I started writing this review, I was driving somewhere with one of my teenagers and I asked if we could listen to the book. She rolled her eyes but was soon laughing out loud as we listened. I think that’s a great testament to how accessible the book is.

Teaching an AI

Many of us use AI products like Siri or Alexa on a regular basis. But how did they get “smart?” In the book, Dr. Shane writes about the process of training machine learning2 systems to be “intelligent.” She tells us how they certainly don’t start smart. Reading about the foibles, flailings, and failings that she has witnessed in her work helped me understand why it is so important to get the training part right, and what needs to be considered as new products are developed.

Dr. Shane starts out comparing machine learning and rule-based AI systems, which are two very different types of AI systems. Briefly, a rule-based system uses rules written by human programmers as it works with data to make decisions. By contrast, a machine learning algorithm3 is not given rules. Instead, humans pick an algorithm, give it a goal (maybe to make a prediction or decision), give it example data that helps it learn4, and then the algorithm has to figure out how to achieve that goal. Depending on the algorithm, it will discover its own rules (for some algorithms this means adjusting the weights on connections between what is input and what is output). From the example data given to the algorithm, it “learns,” or rather improves what it produces, through its experience with that data. It’s important to note that the algorithm is doing the work to improve, not a human programmer. In the book, Dr. Shane explains that after she sets up the algorithm with a goal and gives it training data, she goes to get coffee and lets it work.
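
For the curious, here is “adjusting weights on connections” in miniature: a perceptron, one of the oldest machine learning algorithms, nudges its weights whenever a training example comes out wrong, until it has in effect discovered its own rule (here, logical AND). This is a toy sketch of the general mechanism, not an example from the book.

```python
# Learn logical AND from examples: ([input1, input2], correct_output).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

for _ in range(20):                          # several passes over the training data
    for x, target in data:
        pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
        err = target - pred                  # wrong answer -> nonzero error
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # adjust the connection weights
        b += lr * err                        # the rule emerges from data, not a programmer

print([(x, 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0) for x, _ in data])
```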

Strengths and Weaknesses

There are strengths and weaknesses in the machine learning approach. A strength is that as the algorithm tries to reach its goal, it can detect relationships and features in the data that the programmer may not have thought would be important, or may not even have been aware of. This can either be good or bad.

One way it can be good or positive is that sometimes an AI tries a novel solution because it isn’t bogged down with knowledge constraints of rules in the world. However, not knowing about constraints in the world can simultaneously be bad and lead to impossible ideas. For example, in the book, Dr. Shane discusses how in simulated worlds, an AI will try things that won’t work in our world because it doesn’t understand the laws of physics. To help the AI, a human programmer needs to specify what is impossible or not. Also, an AI will take shortcuts that may lead to the goal, but may not be fair. One time, an AI created a solution that took advantage of a situation. While it was playing a game, the AI discovered there wasn’t enough RAM in the computer of its opponent for a specific move. The AI would make that move and cause the other computer to run out of RAM and then crash. The AI would then win every time. Dr. Shane discusses many other instances where an AI exploits a weakness to look like it’s smart.

In addition, one other problem we have learned about from machine learning work is that an AI highlights and exacerbates problems that it learns from its training data. For example, much training data comes from the internet, and much of the data on the internet is full of bias. When biased data are used to train an AI, the biases and problems in the data become what guide the AI toward its goal. Because of this, our biases, found on the internet, become perpetuated in the decisions the machine learning algorithms make. (Read about some of the unfair and biased decisions that have occurred when AI was used to make decisions about defendants in the justice system.)

Bias

People often think that machines are “fair and unbiased” but this can be a dangerous perspective. Machines are only as unbiased as the human who creates them and the data that trains them. (Note: we all have biases! Also, our data reflect the biases in the world.)

In the book, Dr. Shane says that machine learning algorithms work by “copying humans”: they don’t find the “best solution” or an unbiased one; they seek a way to do “what the humans would have done” (p. 24) in the past because of the data they use for training. What do you think would happen if an AI were screening job candidates based on how companies typically hired in the past? (Spoiler alert: hiring practices do not become less discriminatory, and the algorithms perpetuate and extend biased hiring.)

A related problem comes about because machine learning AIs make their own rules. These rules are not explicitly stated in some machine learning algorithms, so we humans (the creators and the users) don’t always know what an AI is doing. There are calls for machine learning systems to write out the rules they create so that humans can understand them, but this is a very hard problem and it won’t be easy to fix. (In addition, some algorithms are proprietary and companies won’t let us know what is happening.)

Integrating AIs into our lives

It feels necessary to know how a machine is making decisions when it is tasked with making decisions about people’s lives (e.g., prison release, hiring, and job performance). We should not blindly trust how AIs make decisions; an AI has no idea of the consequences of its decisions. We can still use AIs to help us with our work, but we should be very cautious about the types of problems we automate. We also need to ensure that AI systems make clear what they are doing, so that humans can review the automation, override decisions, and understand the consequences of an incorrect decision by an AI. Dr. Shane reminds us that an “AI can’t be bribed but it also can’t raise moral objections to anything it’s asked to do” (p. 4).

In addition, we need to ensure the data we use for training are as representative as possible to avoid bias, make sure that the system can’t take shortcuts to meet its goal, and make sure the systems work for many different populations (e.g., across gender, race, and learning differences). Also, an AI is not as smart as a human; in fact, Dr. Shane shares that most AI systems using machine learning (in 2019) have the approximate brainpower of a worm. Machine learning can help us automate tasks, but we still have a lot of work to do to ensure that AIs don’t harm or damage people.

What are your thoughts or questions on machine learning or other types of AI in education? Tweet to @CIRCLEducators and be part of the conversation.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

See a recent TED Talk by author Janelle Shane.


Notes:

  1. Read the book to find out what the title means!
  2. Machine learning is one of several AI approaches.
  3. Machine Learning is a general term that also includes neural networks and the more specialized neural network class of Deep Learning. Note also that a famous class of ML algorithms that produce rules is decision-tree algorithms.
  4. Some algorithms “learn” with labeled examples and some without, but that’s a discussion beyond the scope of this post.

CIRCLS meet NEXUS! Exploring Learning Analytics, AIED, and Remaking Broadening

Paintbrushes with different colors of paint
Image by RhondaK on Unsplash

By Judi Fusco

Let’s think about our research of the future so we can be more inclusive. Who do we involve, where do we do it, what do we research, why do we do this research, and how do we do it?

A nexus is a collection of interconnected ideas; the NEXUS blog from the Society for Learning Analytics Research (SoLAR) shares practical developments in learning analytics with a broad audience. In this first post in a three-part shared series between NEXUS and the Center for Integrative Research in Computing and Learning Sciences (CIRCLS), we will consider ways the two communities can synergize. This post is cross-posted at both SoLAR Nexus on Medium and SoLAR Research.

Who We Are and What We Do

CIRCLS is a National Science Foundation-funded hub that works to bring together a community of researchers, practitioners in K-12 and informal settings, higher education faculty, technology developers, and graduate students who are researching, developing, or working with emerging technologies for teaching and learning. Learning analytics has played and continues to play a prominent and important role in projects with these future-oriented technologies.

CIRCLS works to bring researchers together to address common needs, plan for the future, and create broader impact as they work to support personal-, community-, and context-centered needs with emerging technologies for learning. In our work, advancing equity is a core goal. In addition, CIRCLS helps to amplify work on emerging technologies for teaching and learning to policymakers, practitioners, and interested stakeholders. We’ve included a link to a 3-minute video to share more about how CIRCLS works to Build Community to Shape Emerging Technologies.

History and Mission

CIRCLS is a new center with a long history that is grounded in the work done in the Center for Innovative Research on Cyberlearning (CIRCL), which started in 2013. CIRCL supported the Cyberlearning program area at NSF, which preceded the Research on Emerging Technologies for Teaching and Learning (RETTL) program. CIRCLS is the new center supporting the RETTL program. First CIRCL, and now CIRCLS, shares reports about work that influenced the field. The projects involve ambitious designs for more equitable learning experiences with emerging technology.

Another characteristic of the projects is how they work to develop learning theories and technologies that are likely to become important to the field within 5-10 years. We see how CIRCLS projects often overlap with the research being done in SoLAR, and there are many members who are active in both communities. Artificial Intelligence (AI) is a growing area, and learning analytics is integral when thinking about AI and how it could augment learning.

In our work to bridge the gap between research and classroom practice, we involve practitioners through Educator CIRCLS and also think about policy needs for emerging technologies, specifically AI. We also have special programming for graduate students and new scholars in our Emerging Scholars expertise exchange. If interested in any of these communities, SoLAR members are welcome to visit our website, read more, and join.

Remaking “Broadening”

When you hear the term “broadening,” (or for our non-US readers from SoLAR, perhaps “inclusion” is more familiar) what do you think? Our upcoming virtual convening will focus on building an understanding of how the field can make stronger progress by examining and remaking “broad.” How can this familiar word become more meaningful and specific, leading to greater intellectual merit and impacts? How can broadening address who we involve in research, where we conduct research, what we research, why we do the research, and how we do research?

The theme “Remake Broadening” challenges our community to learn and engage in innovative ways to make positive changes in education. Through roundtable discussions, breakout sessions, plenaries, and more, we will reflect and consider next steps and recommendations for the field. Our keynote speakers, Nicki Washington and Craig Watkins, will offer insights as to how we can further our work to nurture inclusive communities and how we must move forward to “broaden” multiple aspects of the field. Some of the topics the community will explore include:

  • Learning Analytics and Data Visualization: Broadening our View
  • AI and Education Policy
  • Equity and Ethics Considerations for Teaching and Learning with AI
  • Using the Learning Sciences and Computational Approaches to develop Assessments and Intelligent Tutoring Systems
  • Towards Equity, Accessibility, & Inclusion

Cynthia D’Angelo, Chad Dorsey, and Tiffany Barnes, the facilitators for the Learning Analytics and Data Visualization: Broadening our View session, describe what they will discuss in their session:

For example, as we think about learning analytics and broadening our view, the production and application of learning analytics has extensive implications for education choices both now and in the future. While learning analytics can be used in many ways to improve learning, we must be aware of the more subtle consequences of its use in order to ensure that they are oriented toward the most positive ends possible. We make many choices and assumptions (sometimes implicitly) at many stages of research, including question selection, data collection, data processing, analysis, and presentation. These choices frequently have equity implications, often in subtle ways.

If this sounds interesting, we hope you’ll register for CIRCLS’21 and join us in thinking about these and other issues.

In our second post, we’ll share more about the sessions focused on bringing learning sciences and computer sciences together as we think about broadening in research on emerging technologies for teaching and learning.

Educator Spotlight: Marni Landry

Who is Marni Landry?

Some of us at Educator CIRCLS recently had the pleasure of talking to educator Marni Landry. Marni has been the K-12 STEM Outreach Manager at Grand Canyon University for about three and a half years, where she spends her time coordinating STEM professional development for teachers and, with partner Cori Araza, amazing summer camps for students and teachers, like GenCyber cybersecurity. Before coming to GCU, Marni taught high school science for 16 years. She wrote the STEM integrated curriculum for, and taught in, the Center for Research, Engineering, Science, and Technology program on the Paradise Valley High School campus.

Outside the classroom, Marni has been a leader in the teaching community. She served on the Paradise Valley Technology Committee, designing and delivering technology PD to staff and delivering biotechnology PD as a BioRad fellow. She has also presented STEM PD for the National and Arizona Science Teachers Association (ASTA) and has served as their committee chair. In addition, she partners with MESA (Math Engineering Science Achievement), HOSA-Future Health Professionals, and the Society of Women Engineers (SWE).

Marni’s passion for teaching and learning was evident throughout our conversation, so it was no surprise to learn that her passion and impact have been widely recognized by various organizations. Marni is a recipient of the Presidential Award for Excellence in Science and Math Teaching, a Nobel Top 10 Teacher of the Year, AZ High School Science Teacher of the Year, Arizona Tech Council Teacher of the Year, IEEE Pre-College Teacher of the Year, Arizona Bioindustry Association Educator of the Year, and a Fellow of the Fulbright Teachers for Global Classrooms.

What’s one thing you really care about getting right as an educator?
Even though she’s been out of the classroom for a few years, Marni definitely maintains the heart of a teacher and still works through that lens. When asked what she really cares about getting right as an educator, she said, “Getting people to love learning and getting people who say ‘I can’t’ to say ‘I will.’ Learning is not a task. It’s an adventure! I want them to say, ‘Yeah, this is hard, but that’s the fun part!’”

What are you most proud of in your career?
When asked what she’s most proud of in her career, Marni first pointed to her students’ successes. She said that she has been fortunate to build relationships with so many students and to still be part of many of their lives. “Seeing their success is what I’m most proud of. Other people may not always have seen what I saw in them, but I fought tooth and nail for them. So to see them succeed is what makes me most proud.”

Marni also pointed to a proud personal moment–winning the Presidential Award for Excellence in Math and Science Teaching. She took away something profound from that experience in addition to the recognition. “I didn’t think I was PAEMST material, but my mentor was convinced that I was. I didn’t even think I could go through the application process, but my mentor said I could do it and that she would help.” Marni realized her mentor’s investment in her had a trickle-down effect on her students. “They might not think they’re the right material, but I believe they are. They might not think they can accomplish certain things, but I think they can and I can help.”

What are some of your favorite educational technologies?
You can tell Marni frequently uses tech tools because she had several favorites in her back pocket. Here are a few she mentioned:

You can check out Marni’s Tech Tools Wakelet and GCU’s “Educator Tip of the Day” YouTube channel for more tech tools, tech tips, and general professional development, too!

What is your ideal vision for how the learning sciences and/or educational technologies could shape teaching and learning in the future?
Marni had some great thoughts surrounding the ideal partnership between the learning sciences, technologies, and education. She pointed out that teachers have to overcome several obstacles before they can meaningfully incorporate technology and research into their teaching practices. For one, she said teachers don’t have time to try out several new technologies and get comfortable with them. “Before teachers can use technologies wisely, they have to have time and permission to use them messily. With the demands teachers face, there’s no chance for trying; there’s no chance for messy.” She also talked about the challenges of using educational research to create standardized policies. “The perfect research-based method, strategy, tool, etc. isn’t going to work for everybody. In an ideal world, educators would be valued and given the freedom to motivate their students in the way their students need to be motivated–and that might look different from classroom to classroom. We need the system to come to terms with that.”

Takeaway
We have several great takeaways from our conversation with Marni. One thing we appreciate most is that she highly values the quantitative aspects of the STEM fields she champions, and she equally values the qualitative aspects of being human and of teaching as a human endeavor. While she respects data, her students are more than numbers to her. They are names and faces and personalities and individuals. As the 2021-2022 school year starts, I hope we’re all inspired to be an educator like that.

Contact Info
You can connect with Marni via email at marni.landry@gcu.edu, through GCU’s Outreach program at CayonPD.com, or on social media @marni_landry.