Category Archives: AI in Education

Apprentice Learner: Artificial Intelligence (AI) in the Classroom

by Sarah Hampton

One of my favorite things about CIRCLS is the opportunity to collaborate with education researchers and technology developers. Our goal as a community is to innovate education using technology and the learning sciences to give more learners engaging educational experiences to help them gain deep understanding. To reach that goal, we need expertise from many areas: researchers who study how we learn best, teachers who understand how new technologies can be integrated, and developers who turn ideas into hardware or software.

Recently I was reminded of a conversation Judi, Pati, and I had with Daniel Weitekamp in June of 2020. Daniel, a PhD student at Carnegie Mellon University at the time, was developing an AI tool for teachers called Apprentice Learner.

A stacked addition problem demonstrating carrying. "839 + 445 = 1284" Space below the problem displays "hint," "done," "previous," and "next" buttons.
Figure 1. The Apprentice Learner interface students use. The user can request a hint or type in an answer and then hit done.

Apprentice Learner looks a bit like a calculator at first glance, so an onlooker might be tempted to say, “What’s so cutting edge about this? We’ve been able to do basic math on calculators for years.” But we need to understand the difference between traditional software and software using artificial intelligence (AI) to appreciate the new benefits this kind of tool can bring to the education table.

In a basic calculator, an unchanging program tells the screen to display “1284” when you type in “839+445.” There’s no explanation given for how or why the programming behind a calculator works. Yet for every math problem someone could type into a calculator, the answer displayed follows from explicitly programmed instructions.

Contrast a calculator with Apprentice Learner, which uses machine learning (a type of artificial intelligence). No one tells Apprentice Learner to display “1284” when it sees “839+445.” Instead, it has some basic explicit instructions and is given lots of examples of correctly solved problems adding two or more columns of numbers. Then it has to figure out how to answer new questions. The examples it is given are called training data. In this case, Apprentice Learner was given explicit instructions about adding single-digit numbers and then lots of training data (multidigit addition problems with their answers), maybe problems like “21+43=64,” “49+8=57,” and “234+1767=2001.” Then it starts guessing at ways to arrive at the answers given in the training data.

The first guess might be to stack the numbers and add each column from left to right. That works perfectly for “21+43,” but gives an incorrect answer of “129” for “49+8.”

Two guesses. First: “21” correctly stacked over “43” equals 64, but the “49” over “8” stack gives the incorrect answer “129” because the computer calculates 4+8 in the first column and brings the 9 in the second column down. Second: “21” correctly stacked over “43” again equals 64, but the “49” over “8” stack gives the incorrect answer “417” because the computer calculates 4 for column 1 and 17 for column 2 and puts them together.

The second guess might be to stack the numbers and add each column from right to left. Again, that works perfectly for “21+43.” Unfortunately, that would give an answer of “417” for “49+8.”

The software continues finding patterns and trying out models until it finds one that fits the training data best. You can see below that, eventually, Apprentice Learner “figured out” how to regroup (aka carry) so it could arrive at the correct answer.
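This guess-and-check process can be sketched in code. The following is a toy illustration of the idea, not Apprentice Learner’s actual algorithm: three candidate strategies are tested against the training data, and only the one that regroups (carries) fits every example.

```python
# A toy illustration of guess-and-check learning -- not Apprentice Learner's
# actual algorithm. Each "guess" is a candidate strategy for multi-digit
# addition, tested against the training data.

def add_left_aligned(a, b):
    """Guess 1: stack left-aligned, add each column, no carrying."""
    da, db = str(a), str(b)
    width = max(len(da), len(db))
    da, db = da.ljust(width), db.ljust(width)
    digit = lambda ch: int(ch) if ch.strip() else 0
    return int("".join(str(digit(x) + digit(y)) for x, y in zip(da, db)))

def add_right_no_carry(a, b):
    """Guess 2: add columns right to left, but never carry."""
    da, db = str(a)[::-1], str(b)[::-1]
    width = max(len(da), len(db))
    da, db = da.ljust(width, "0"), db.ljust(width, "0")
    out = ""
    for x, y in zip(da, db):
        out = str(int(x) + int(y)) + out
    return int(out)

def add_with_carry(a, b):
    """Guess 3: add columns right to left, regrouping (carrying)."""
    da, db = str(a)[::-1], str(b)[::-1]
    width = max(len(da), len(db))
    da, db = da.ljust(width, "0"), db.ljust(width, "0")
    out, carry = "", 0
    for x, y in zip(da, db):
        total = int(x) + int(y) + carry
        out, carry = str(total % 10) + out, total // 10
    return int(str(carry) + out if carry else out)

training = [(21, 43, 64), (49, 8, 57), (839, 445, 1284)]
for strategy in (add_left_aligned, add_right_no_carry, add_with_carry):
    fits = all(strategy(a, b) == answer for a, b, answer in training)
    print(strategy.__name__, "fits the training data:", fits)
```

Running this shows the first two guesses each fit some examples but fail on “49+8,” while the carrying strategy fits all of the training data.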

A stacked addition problem demonstrating carrying. "839 + 445 = 1284" Space below the problem displays "hint," "done," "previous," and "next" buttons.

So what are the implications for something like this in education? Here are a few of my thoughts:

Apprentice Learner models inductive learning which can help pre-service teachers.

Induction is the process of establishing a general law by observing multiple specific examples. It’s also the basic principle machine learning uses. In addition, inductive reasoning tasks such as identifying similarities and differences, pattern recognition, generalization, and hypothesis generation play important roles in learning mathematics (see Haverty, Koedinger, Klahr, and Alibali). Multiple studies have shown that greater learning occurs when students induce mathematical principles themselves first rather than having the principles directly explained at the outset (see Zhu and Simon; Klauer; and Koedinger and Anderson).

However, instructional strategies that prompt students to reason inductively prior to direct instruction can be difficult for math teachers to implement if they haven’t experienced learning math this way themselves. Based on conversations with multiple math teacher colleagues over the years, most of us learned math in a more direct manner, i.e., the teacher shows and explains the procedure first and then the learner imitates it with practice problems. (Note: even this language communicates that there is “one right way” to do math, unlike induction, in which all procedures are evaluated for usefulness. This could be a post in its own right.)

Apprentice Learner could provide a low-stakes experience to encourage early-career teachers to think through math solutions inductively. Helping teachers recognize and honor multiple student pathways to a solution empowers students, helps foster critical thinking, and increases long-term retention. (See Atta, Ayaz, and Nawaz and Pokharel) This could also help teachers preempt student misconceptions (like column misalignment caused by a misunderstanding of place values and digits) and be ready with counterexamples to show why those misconceptions won’t work for every instance, much like I demonstrated above with Apprentice Learner’s possible first and second guesses at how multi-digit addition works. Ken Koedinger, professor of human-computer interaction and psychology at CMU, put it like this: “The machine learning system often stumbles in the same places that students do. As you’re teaching the computer, we can imagine a teacher may get new insights about what’s hard to learn because the machine has trouble learning it.”

The right training data is crucial.
What would have happened if only some types of problems appeared in the training data? What if they were all two-digit numbers? Then it wouldn’t have mattered whether you stacked them left to right or right to left. What if none required regrouping (carrying)? Then adding column by column without ever carrying would work in every instance. But when all the edge cases are included, the model is more accurate and robust.
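A quick way to see why coverage matters is to check a training set for the edge case directly. This is an illustrative sketch (the helper below is mine, not part of Apprentice Learner):

```python
def requires_carrying(a, b):
    """True if adding a and b column by column ever produces a carry."""
    da, db = str(a)[::-1], str(b)[::-1]
    width = max(len(da), len(db))
    da, db = da.ljust(width, "0"), db.ljust(width, "0")
    carry = 0
    for x, y in zip(da, db):
        carry = (int(x) + int(y) + carry) // 10
        if carry:
            return True
    return False

def covers_carrying(training_pairs):
    """Does the training set exercise the carrying edge case at all?"""
    return any(requires_carrying(a, b) for a, b in training_pairs)

no_carry_set = [(21, 43), (12, 34), (30, 52)]
full_set = [(21, 43), (49, 8), (234, 1767)]

print(covers_carrying(no_carry_set))  # False: carrying is never exercised
print(covers_carrying(full_set))      # True
```

If `covers_carrying` returns False, a model could fit every training example without ever learning to regroup.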

Making sure the training set is large and varied enough to cover all the edge cases is crucial to the success of any AI model. Consider what has already happened when insufficient training data was used for facial recognition software: “A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18-30 years old.” Some of the most historically excluded people were most at risk for negative consequences when the AI failed. What’s important for us as educators? We need to ask questions about things like training data before using AI tools, and do our best to protect all students from the negative consequences of software.

Feedback is incredibly advantageous.

A flowchart demonstrating different ways a user can give direct input to Apprentice Learner such as demonstrating the next step, specifying if the highlighted input is correct, etc.
Figure 2. Diagram of how it works to give feedback to the Apprentice Learner system.

One of the most interesting things about Apprentice Learner is how it incorporates human feedback while it develops models. Instead of letting the AI run its course after the initial programming, it’s designed for human interaction throughout the process. The developers’ novel approach allows Apprentice Learner to be up and running in about a fourth of the time compared to similar systems. That’s a significant difference! (You can read about their approach in the Association for Computing Machinery’s Digital Library.)

It’s no surprise that feedback helps the system learn; in fact, there’s a parallel between helping the software learn and helping students learn. Feedback is one of the most effective instructional strategies in our teacher toolkit. As I highlighted in a former post, feedback had an average effect size of 0.79 standard deviations, an effect on students’ performance greater than that of students’ prior cognitive ability, socioeconomic background, or reduced class size. I’ve seen firsthand how quickly students can learn when they’re given clear individualized feedback exactly when they need it. I wasn’t surprised to see that human intervention could do the same for the software.
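For readers who want the effect-size claim unpacked: 0.79 here is a standardized mean difference (Cohen’s d), meaning the feedback group scored 0.79 pooled standard deviations higher on average. A minimal sketch with invented scores, for illustration only:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two groups."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Invented test scores, purely to show the arithmetic.
with_feedback = [80, 85, 90]
without_feedback = [70, 75, 80]
print(round(cohens_d(with_feedback, without_feedback), 2))  # 2.0
```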

I really enjoyed our conversation with Daniel. It was interesting to hear our different perspectives around the same tool. (Judi is a research scientist, Pati is a former teacher and current research scientist, Daniel is a developer, and I am a classroom teacher.) I could see how this type of collaboration during the research and development of tools could amplify their impacts in classrooms. We always want to hear from more classroom teachers! Tweet @EducatorCIRCLS and be part of the conversation.

Thank you for your time in talking and reviewing this post, Daniel Weitekamp, PhD Candidate, Carnegie Mellon University.

Learn More about Apprentice Learner:

Learn More about Math Teaching and Learning:

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

How Can AI Systems Support Teachers: 5 Big Ideas from the Learning Sciences

This post was originally published on the Digital Promise website.

By Pati Ruiz and Judi Fusco

The learning sciences study the design and implementation of effective learning environments by drawing on a variety of perspectives across a range of physical, social, and technological spaces1. The learning sciences focus on human learning and on helping individuals achieve their fullest potential and attain 21st-century skills. Because of this focus, the learning sciences should be foundational in the design and development of emerging technologies for teaching and learning. AI systems are an emerging technology that is starting to play a significant role in the redesign of learning environments. To increase our chances of creating successful AI systems for learning, they should be grounded in the learning sciences. We’ll discuss five big ideas from the learning sciences in relation to the design of AI systems: Representation and Supports; Collaboration; How Students Think; Building on Students’ Cultural and Linguistic Assets; and Assessment and Feedback. We propose these big ideas as a starting point in the design of better AI systems.

Big Idea 1: Representation and Supports

The learning sciences have found that enabling students to make connections across multiple representations (for example, graphs, writing, images, maps, blocks, etc.) contributes to knowledge construction. Different forms of representation give them a way to make sense of concepts in the best way that helps them construct their knowledge. How can this big idea be used in designing an AI system?

In a classroom where a teacher is fielding questions from students about a specific species of frog, an AI system can support the teacher by quickly searching for and projecting a set of visual representations of the frog that are appropriate for the students and have high-quality information for the teacher and students. When teaching about a metabolic function, an animation might help, and the AI system could share the animation and also point to text or other representations that may help students make connections to understand the process. By giving students and teachers just-in-time support like a relevant animation or engaging follow-up questions, AI systems can support teachers to orchestrate learning experiences by automating tasks (as described above) so teachers can spend more time focused on students. Beyond those types of just-in-time supports, AI systems can further support the engagement of all students in sustained creative work—something that has been a challenging problem in the design of learning environments.

Big Idea 2: Collaboration

The learning sciences have found that learning environments should be designed to foster collaboration and help learners work together to share and make sense of challenging problems. Research points us toward more social and collaborative learning environments.

AI systems could support this big idea by making recommendations for how teachers group students or by giving students themselves prompts that may lead to shared understanding when working in groups without the teacher. Emerging AI technologies might help teachers ask different groups the right guiding questions as the AI system “listens” to what each group is discussing. An AI system that asks questions might also be able to assess the answers to those questions, help students arrive at the same conceptual understanding, and determine when the group is ready for a new task.

Big Idea 3: How Students Think

The learning sciences have found that learning environments should be not only collaborative, but also foster adaptive or personalized learning because there is not a single way to learn and individuals have unique needs when it comes to learning environment designs.

AI systems might support teachers in facilitating this big idea by finding instances of student reasoning for teachers to review based on the analysis of video, audio, or student work. AI systems can also quickly provide insights to teachers about what learning path a student is taking and analytics could help teachers understand how each of their students tends to learn a concept based on their writing, speaking, or movements. A teacher might take the feedback given by an AI system and follow-up with students about their individual learning process and make decisions with them about what to do next. By helping students keep track of how they are practicing and providing scaffolds when they are needed and removing them when a student is ready, an AI system can support students’ unique learning needs.

Big Idea 4: Building on Students’ Cultural and Linguistic Assets

The learning sciences have found that learning and teaching are cultural processes and that we best support learning when we include students’ cultural and linguistic backgrounds as pedagogical assets. This big idea means that AI systems need to support learning environments that enable teachers and learners to address the multiple elements of learning, including identity and culture. To do this, developers need to restructure the assumptions that are made about learners and what they know by keeping both teachers and learners in the loop. For example, AI systems can help personalize the materials for Spanish-speaking students and their parents by translating sections of text, or by providing just-in-time translations so that they can more fully participate in learning experiences. Another personalization could be an AI system where the agent speaks to students and engages them using speech patterns similar to those of the student.

Big Idea 5: Assessment and Feedback

There’s been a lot of discussion around how AI systems can support teachers and students with new types of assessment, such as more stealth assessment, or formative assessment. Formative assessment provides specific information to a teacher about students’ strengths and challenges to help a teacher adapt their instruction to meet students’ needs. Students’ needs vary and they can also be challenging to anticipate—this is where AI systems can support teachers and readers. We won’t get into assessment and feedback more here, but check out this Educator CIRCLS webinar on Assessment or read this post on AI and Formative Assessment to learn more about this big idea.

Looking ahead

These big ideas from the learning sciences should be incorporated into AI systems to create better, user-centered products. In addition, educators need to be involved in the process because they have valuable insights about what is working and not working in ways that complement researchers’ expertise. Merlyn Mind Practitioner Advisory Board member, Grace Magley reminds us that teachers “have to see real benefits, not just new tech” and “they need to be shown how it would work in a classroom full of diverse learners. They need to see benefits for the teacher as well as the learners.”

This blog post is an invitation to you to connect with us and join the conversation on the future of AI in Education. If you are interested in getting involved, please visit:

Educator CIRCLS
AI CIRCLS
1Sawyer, R. (Ed.). (2014). The Cambridge Handbook of the Learning Sciences (2nd ed., Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139519526

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

I’m a Teacher, Will Artificial Intelligence Help Me?

Robot caricature in a yellow circle thinks of 0's and 1's, a teacher in a red heart thinks of people
by Judi Fusco and Pati Ruiz

Artificial Intelligence (AI) systems are becoming more prevalent everywhere including education. Educators often seem to wonder, “What is it?” and, “What can it do?” Let’s address these questions and then discuss why and how YOU should be involved!

What is it and what can it do for teachers?

Artificial intelligence (AI) is a field of computer science that lets machines make decisions and predictions. The goal of AI is to create machines that can mimic human capabilities. To do this, AI systems use many different techniques. You are probably using AI systems every day because they are embedded in our mobile phones and cars and include things like face recognition to unlock your phone, digital voice assistants, and mapping/route recommendations. We’re not going to go into the details of how AI works in this post, but you can read a prior post on AI and check out this glossary of AI terms that might be helpful if you want more background on the topic. In this post, we will focus on examples of AI systems that can help teachers.

Teachers have to do countless tasks, such as lesson planning, teaching, grading, mentoring, classroom management, keeping up with technology in the classroom and new pedagogical practices, monitoring progress, and administrative work, all while keeping students’ social and emotional needs in mind. While AI has come a long way since the 1950s, when the term was coined and work on Intelligent Tutoring Systems began, it cannot replace a teacher in the classroom. We will share examples of how existing AI systems have successfully helped teachers and reduced their load.

Example: Personalized online math learning software for middle and high school students

Mathia provides coaching to students as they solve math problems and gives teachers a detailed picture of where each student is, as well as suggestions for conversation starters to talk about each student’s understanding. This support allows teachers to spend more time with students focused on learning, while also directly giving the students additional, useful feedback as they solve math problems.

Example: A platform that provides immediate feedback to students and assessment data to teachers

Another AI system that supports both teachers and students is ASSISTments. It is also currently focused on math. For students, it gives assistance in the form of hints and instant feedback while they do math homework. For teachers, it gives information about which homework problems were difficult and what the most common wrong answers were. This can prompt teachers to spend time discussing the problems that students need the most help on, and teachers can be sure to re-teach concepts based on common wrong answers.

In addition to teaching content, when you think about all the things a teacher does in managing their classroom and all the “plates” they must juggle to keep 25, 30, or more students on task, engaged, and learning, you can imagine they could use some support. The systems described next primarily support teachers.

Example: A digital assistant for teachers

One AI system that helps with classroom management tasks is a multimodal digital assistant specifically developed for teachers with privacy in mind, called Merlyn. Merlyn looks like a small speaker, but does so much more. It allows teachers to use voice and a remote control to control content from a distance. For example, with Merlyn teachers can set timers and switch displays between their laptop, document camera, and interactive whiteboard. Teachers can control a web browser on their laptop and do things like share a presentation, go to a specific point in a video, show a website, or search. This frees them up to walk around the classroom and interact with students more easily.

Other ways AI systems can support teaching and learning

The examples above show three categories of how AI systems have helped teachers and their students. Three more examples follow. TeachFX is an AI system that can analyze the conversation from a classroom session and identify how much the teacher talked versus the students. It also identifies whether teachers let students build on each other’s thoughts, leading to discussions. With the help of this AI system, teachers can work to engage their students in discussion and reflect on their practice.

Grading is another task that is very important but very time consuming. Gradescope, for example, supports instructors in grading their existing paper-based and digital assignments in less time than it normally takes them. It does this by scanning text and sorting similar responses together so the teacher grades some of each type; the system then “learns” from the teacher, automatically grades the rest, and sends its grading back to the teacher for review.
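The grouping idea can be sketched simply. This toy version (not Gradescope’s actual method, which handles far messier input) buckets short answers by a normalized form so a teacher could grade one representative per bucket:

```python
from collections import defaultdict

def normalize(answer):
    # Collapse case and whitespace so trivially different answers match.
    return " ".join(answer.lower().split())

def group_responses(responses):
    """Map each normalized answer to the students who gave it."""
    groups = defaultdict(list)
    for student, answer in responses:
        groups[normalize(answer)].append(student)
    return dict(groups)

# Hypothetical student responses to a short-answer question.
responses = [
    ("ana", "Photosynthesis"),
    ("ben", "  photosynthesis "),
    ("cara", "cellular respiration"),
]
print(group_responses(responses))
```

A real system would also need fuzzy matching and teacher review: the teacher grades a sample from each group, the system generalizes, and the teacher checks the result.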

Finally, AI systems that are specialized within a subject matter can allow teachers to set up content-specific learning experiences. For example, in the domain of science, Inq-ITS allows teachers to select digital labs for their middle school students. When completing the assigned digital labs, students learn by doing. Inq-ITS autoscores the labs in real time and shows the teacher performance updates for each student. A teacher can use the reports to provide appropriate support to students who need additional help. Inq-ITS also supports students with hints while they perform the labs.

Educators Must be Involved in the Design of AI Systems

The AI systems described above support or augment, but never replace, a teacher. We believe that AI systems can help by doing the things that machines are good at while teachers do the things that humans do best.

The AI systems above are also designed by teams that have made education and learning environments the main audience for their systems. They have also included teachers in their design process. There are other AI tools that exist, and even more being developed, to support teachers and students on other activities and tasks, but some don’t have the same focus on education. We think it’s important that educators, the end users, be involved in the design of AI systems for classrooms.

Some of the teams that design AI systems for education haven’t been in a classroom recently and when they were they probably weren’t the teacher. To make a technology that works in classrooms requires classroom experts (the main users) to be part of the design process and not an afterthought. When teachers give feedback, they help ensure 1) that systems work in ways that make sense for classrooms in general, and 2) that systems would work well in their specific classroom situations. (We’ll discuss why this is the case in another future blog post.)

A final, yet very important reason for educators to be involved, is that while AI systems can bring opportunities to support teaching and learning, there are also privacy, ethics, equity, and bias issues to be aware of. We don’t want to add anything to your already full plate, but as technologies come into your classroom, you should ask questions about how the system supports students, if the systems were designed for students like your students, what the privacy policies are, and any implications that might affect your students.

We understand that most teachers don’t have a single extra minute, but it is crucial to have current teachers in the design process. If you learn and think about AI systems as they become more prevalent, you will become an even more invaluable teacher or technology leader in your school or district. Your voice is important, and getting more educators involved makes a more powerful collective voice.

Looking ahead

If you’re still reading this blog, you probably have an interest in AI systems; below we suggest a few places to connect. Teachers are critical to the design of effective AI technologies for schools and classrooms. We hope this post has given you some insights into how AI systems might support you and your students. If you are interested in getting involved, we have some links for you below. Consider this blog post an invitation to you to connect with us and join the conversation; we hope you’ll join us in thinking about the future of AI in Education.

In our next post we will discuss how AI systems informed by learning science principles may help solve problems in learning environments.

Let us know your thoughts @educatorCIRCLS.

Ways to join:
Educator CIRCLS
AI CIRCLS
Join the ASSISTments Teacher Community
Leadership Programs — TeachFX

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Book Review: You Look Like a Thing and I Love You

This post was originally published on CIRCLEducators.org in October, 2020.

by Judi Fusco
Box with eyes, arms, and feet holds a martini glass and an outline of a heart. Text reads: “And I Love You.”

During CIRCL Educators’ Summer of Artificial Intelligence (AI), I read the book You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place, by Dr. Janelle Shane. I got the recommendation for it from fellow CIRCL Educator, Angie Kalthoff.

I found the book helpful even though it is not about AI in education. I read and enjoyed the e-book and the audio version. As I started writing this review, I was driving somewhere with one of my teenagers and I asked if we could listen to the book. She rolled her eyes but was soon laughing out loud as we listened. I think that’s a great testament to how accessible the book is.

Teaching an AI

Many of us use AI products like Siri or Alexa on a regular basis. But how did they get “smart?” In the book, Dr. Shane writes about the process of training machine learning systems to be “intelligent.” She tells us how they certainly don’t start out smart. Reading about the foibles, flailings, and failings she has witnessed in her work helped me understand why it is so important to get the training part right, and what needs to be considered as new products are developed.

Dr. Shane starts out comparing machine learning and rule-based AI systems, which are two very different types of AI. Briefly, a rule-based system uses rules written by human programmers as it works with data to make decisions. By contrast, a machine learning algorithm is not given rules. Instead, humans pick an algorithm, give it a goal (maybe to make a prediction or decision), and give it example data that helps it learn; the algorithm then has to figure out how to achieve that goal. Depending on the algorithm, it will discover its own rules (for some, this means adjusting weights on connections between what is input and what is output). From the example data it is given, the algorithm “learns,” or rather improves what it produces, through its experience with that data. It’s important to note that the algorithm is doing the work to improve, not a human programmer. In the book, Dr. Shane explains that after she sets up the algorithm with a goal and gives it training data, she goes to get coffee and lets it work.
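The contrast can be sketched in a few lines of Python. This is a toy illustration under invented assumptions (a hypothetical pass/fail cutoff), not an algorithm from the book: the rule-based classifier applies a rule a human wrote, while the “learner” is given only labeled examples and searches out its own rule.

```python
# Toy contrast between a rule-based classifier and a simple learner.
# The pass/fail scenario and the cutoff of 60 are invented for illustration.

def rule_based_pass(score):
    # Rule-based: a human explicitly wrote this rule.
    return score >= 60

def learn_threshold(examples):
    # Machine learning (in miniature): given labeled examples only,
    # search for the cutoff that best reproduces the labels.
    best_threshold, best_correct = 0, -1
    for t in range(0, 101):
        correct = sum((score >= t) == label for score, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

# Training data: (score, passed) pairs the learner gets to see.
training = [(45, False), (55, False), (62, True), (80, True)]
threshold = learn_threshold(training)
print("learned cutoff:", threshold)  # lands between 55 and 62
```

The learner never sees the human’s rule; it induces a cutoff somewhere between the highest failing score and the lowest passing one, the same induction-from-examples idea described earlier in this archive.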

Strengths and Weaknesses

There are strengths and weaknesses in the machine learning approach. A strength is that as the algorithm tries to reach its goal, it can detect relationships and features that the programmer may not have thought were important, or may not even have been aware of. This can be either good or bad.

One way it can be good or positive is that sometimes an AI tries a novel solution because it isn’t bogged down by knowledge of constraints in the world. However, not knowing about constraints in the world can also be bad and lead to impossible ideas. For example, in the book, Dr. Shane discusses how, in simulated worlds, an AI will try things that won’t work in our world because it doesn’t understand the laws of physics. To help the AI, a human programmer needs to specify what is and isn’t possible. Also, an AI will take shortcuts that may lead to the goal but may not be fair. In one case, an AI playing a game discovered that its opponent’s computer didn’t have enough RAM for a specific move. The AI would make that move, causing the other computer to run out of RAM and crash, and the AI would then win every time. Dr. Shane discusses many other instances where an AI exploits a weakness to look like it’s smart.

In addition, one other problem we have learned about from machine learning work is that it highlights and exacerbates problems it learns from training data. For example, much training data comes from the internet, and much of the data on the internet is full of bias. When biased data are used to train an AI, the biases and problems in the data become what guide the AI toward its goal. Because of this, our biases, found on the internet, become perpetuated in the decisions machine learning algorithms make. (Read about some of the unfair and biased decisions that have occurred when AI was used to make decisions about defendants in the justice system.)

Bias

People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)

In the book, Dr. Shane explains that machine learning algorithms work by “copying humans”: they don't find the “best solution” or an unbiased one; they seek a way to do “what the humans would have done” (p. 24) in the past, because that is what their training data reflect. What do you think would happen if an AI screened job candidates based on how companies typically hired in the past? (Spoiler alert: hiring practices do not become less discriminatory; the algorithms perpetuate and extend biased hiring.)
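Dr. Shane's point about “copying humans” can be made concrete with a toy sketch. Everything below is invented for illustration (the groups, the records, the 50% cutoff); no real screening system works this simply, but the failure mode is the same: a model that imitates past decisions automates whatever bias those decisions contain.

```python
# Toy sketch (all data hypothetical): a "model" that imitates past hiring
# decisions reproduces the bias in those decisions.

past_hires = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

def hire_rate(group):
    """Fraction of past candidates from this group who were hired."""
    rows = [r for r in past_hires if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

def screen(candidate_group):
    # "Do what the humans would have done": approve only if the
    # historical hire rate for that group is above 50%.
    return hire_rate(candidate_group) > 0.5

print(screen("A"))  # True  -- group A was favored historically
print(screen("B"))  # False -- the old bias is now automated
```

Nothing about candidate quality appears anywhere in this "model"; it learned only who was hired before, which is exactly why training data matters so much.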

A related problem arises because machine learning AIs make their own rules. In some algorithms those rules are never explicitly stated, so we (the humans who create and use them) don't always know what an AI is doing. There are calls for machine learning systems to write out the rules they create so that humans can understand them, but this is a very hard problem and won't be easy to solve. (In addition, some algorithms are proprietary, and companies won't reveal what is happening.)
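For contrast with black-box models, here is a toy learner whose single learned rule is fully readable. The data and the threshold search below are invented for illustration; real rule-based learners (such as decision trees) are far more elaborate, but the idea that a learned rule can be written out for humans to inspect is the same.

```python
# Toy sketch (hypothetical data): learn one human-readable rule from examples.

def learn_threshold(examples):
    """Find the score cutoff that correctly classifies the most examples."""
    best_t, best_correct = None, -1
    for t in sorted(score for score, _ in examples):
        # Count how many examples this candidate rule gets right.
        correct = sum((score >= t) == label for score, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# (score, passed) pairs -- made up for this example.
training = [(35, False), (48, False), (62, True), (71, True), (90, True)]
t = learn_threshold(training)
print(f"Learned rule: predict True when score >= {t}")  # threshold 62
```

Because the entire model is one threshold, a human can audit it at a glance; the interpretability problem in the post is about models with millions of implicit "thresholds" instead of one.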

Integrating AIs into our lives

It feels necessary to know how a machine is making decisions when it is tasked with decisions about people’s lives (e.g., prison release, hiring, and job performance). We should not blindly trust how AIs make decisions; AIs have no idea of the consequences of their decisions. We can still use them to help us with our work, but we should be very cautious about the types of problems we automate. We also need to ensure that an AI makes clear what it is doing, so that humans can review the automation, know how to override its decisions, and understand the consequences when it is wrong. Dr. Shane reminds us that an “AI can’t be bribed but it also can’t raise moral objections to anything it’s asked to do” (p. 4).

In addition, we need to ensure the data we use for training are as representative as possible to avoid bias, make sure the system can’t take shortcuts to meet its goal, and make sure the system works well for many different populations (e.g., across gender, race, and learning differences). Also, an AI is not as smart as a human; in fact, Dr. Shane shares that most AI systems using machine learning (in 2019) had the approximate brainpower of a worm. Machine learning can help us automate tasks, but we still have a lot of work to do to ensure that AIs don’t harm or damage people.

What are your thoughts or questions on machine learning or other types of AI in education? Tweet to @CIRCLEducators and be part of the conversation.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

See a recent TED Talk by author Janelle Shane.


Notes:

  1. Read the book to find out what the title means!
  2. Machine learning is one of several AI approaches.
  3. Machine learning is a general term that also includes neural networks and the more specialized class of neural networks called deep learning. Note also that decision-tree algorithms are a famous class of ML algorithms that use explicit rules.
  4. Some algorithms “learn” with labeled examples and some without, but that’s a discussion beyond the scope of this post.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Reflections on the AI and Learning Environments Webinar: Things to Consider When Making Purchasing and/or Adoption Decisions for AI Tools

by Sarah Hampton

On April 21, I was able to participate in something really exciting! I joined some amazing researchers and former teachers in the Educators, Artificial Intelligence, and the Future of Learning webinar on Learning Environments facilitated by James Lester. The webinar was designed to help practitioners, AI researchers, and developers share their perspectives on how artificial intelligence can be used in the classroom. As you may know, I am a middle and secondary math teacher. My fellow panelists included:

  • Diane W. Doersch, Technical Project Director, Digital Promise
  • Cindy Hmelo-Silver, Learning and Technology Researcher, Indiana University
  • Kylie Peppler and Emily Schindler, Learning and Technology Researchers, University of California, Irvine

The webinar focused on how AI can enhance learning environments. It started with James, who discussed the advancements in educational AI during his 25 years of work in the field, the significant benefits it can provide, and the current demand for AI in educational settings. In other words, this is a hot topic in education right now!

Next, Diane Doersch shared her thoughts on AI in education, drawing from her experiences as a former classroom teacher, a Director of Technology for a large school district, and Chief Technology and Information Officer. She called for optimism yet caution and thoughtful vetting processes before incorporating AI in classrooms. She also stressed how important it is for school decision makers to know and understand what artificial intelligence is and the impacts that it has in order to properly vet products.

In this initial post, I want to camp out on Diane’s thoughts; we’ll discuss Cindy Hmelo-Silver and Kylie Peppler and Emily Schindler’s work in later posts. At Educator CIRCLS, we’ve really been digging into artificial intelligence so we can participate in the important conversation happening right now around how AI can be used in classrooms, and, perhaps more importantly, when AI should and shouldn’t be used in classrooms. We want to offer our educator perspectives to the communities developing, researching, and creating policy around AI in education. Furthermore, we want you to understand artificial intelligence so you can offer your unique perspectives and advocate for your students, too. Our friends at Digital Promise recently posted Artificial Intelligence 101: Covering the Basics for Educators. It’s a great introduction to AI and has points to ponder for veteran AI folks, too.

I’ve spent a lot of time reflecting since the webinar. I’ve specifically been thinking about things to consider when making purchasing and/or adoption decisions for AI products. Diane and I offered some suggestions during the webinar (timestamp 39:55), and I have added more below. You will notice some common themes from AI 101 and from this school procurement guide by Edtech Equity. I hope these can be useful resources for you and your school decision makers as you’re sure to see more and more AI products coming your way!

Is it safe? Is it secure? Is it ethical?

  • How is the company funded? Do they sell the data they collected? How is the data safeguarded?
  • What was the training data for the AI like? Was it sufficient in volume and diversity? Has it had adversarial training?
  • What was the fitness model like when training the AI? What was the goal and how was fitness measured?
  • What are the consequences if the AI fails? How does it fail?

Does it align with the mission of the district/school?

  • Does it promote the kind of district/school culture you want?
  • Does it create a significantly better learning experience that you couldn’t gain otherwise? Will it lead to substantial time saving or learning gains or meaningful learning experiences? Is it more than a wow factor?
  • Does it promote the kind of assessments and standards you want to grow toward, or does it merely increase performance on your current assessments and standards?

Is it classroom/teacher friendly?

  • Was it developed in collaboration with teachers? If not, it might work really well in the lab but may not extend to the complexity of a real classroom.
  • Has it been tested in a classroom context similar to your own?
  • Can the teacher override the AI if necessary?
  • Does the tool free up the teacher to do what the teacher does best? You don’t want to offload what humans do best onto a machine. You want to maximize what machines do best and what people do best.
  • Does the tool have a thoughtful approach to classroom management?
  • Does the tool have a simple but thoughtful teacher dashboard?
  • Will implementing the tool require teachers to change their pedagogy? If so, what supports, training, and time will be offered to make that shift successful?
  • Does it promote the kind of classroom culture/activities you want? For example, does it help with collaboration, critical thinking, engaging all students, etc.?

What do you think? Did I leave something out? Feel free to tweet us @EducatorCIRCLS with any comments or suggestions! Stay tuned for future posts unpacking important topics from the webinar and sign up for the CIRCLS newsletter to stay updated on emerging technologies for teaching and learning. I’ll leave you with a question Diane posed, “If AI is the solution, then what’s the problem we’re trying to solve?”

Related

We also have resources from the other webinars in this series and additional posts on AI.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Reflections on Coded Bias


“Algorithmic justice––making sure there’s oversight in the age of automation––is one of the largest civil rights concerns we have.” – Joy Buolamwini

On May 3rd, 2021 Educator CIRCLS hosted a watch party for the film Coded Bias which highlights the incredible work being done by organizations, data scientists, and activists on an international scale. The film challenged our unconscious biases and encouraged us to listen to one another as we consider the ways that we interact with artificial intelligence (AI) on a daily basis. To begin with, the film made very clear the wide societal impacts, both positive and negative, of AI as well as the fact that AI algorithms can perpetuate biases. Given this, we believe it is essential to become more knowledgeable about AI so that we, as educators, can make informed decisions about AI. As we watched this film we considered and discussed the ethical implications that need to be fully investigated before new AI tools are adopted in our classrooms. This film also helped us see that we also need to investigate the people designing the AI and helped us arrive at some important questions that we need to be asking about AI.

Here are some questions:

  • How was the AI system designed, for classroom use or other situations? At what point are teachers brought in to make decisions about their students?
  • What data was used when the system was trained?
    • What groups of people were included during the testing process?
  • What data will be collected by the system, and what will happen to that data if the tool is sold? Will it be used only for the purpose specified? Are there any potential dangers to the students? Are there any potential dangers to the teachers who use the system with their students?
    • Can students be identified from this data?
    • Can teachers be identified from this data?
    • Can this data be used to evaluate teachers’ performance (something that may not be specified by the system)?
  • How does the system interact with students, and can I give feedback to the system or override the decisions?

Another very important but difficult question to answer is: When this AI tool fails, how does it fail, and what are the consequences? While EdTech designers might not be able to answer this question precisely, you can use it to start a conversation about the pitfalls of a particular piece of technology. It will also challenge EdTech designers to think about these difficult questions and engage in the design process to adjust their product if needed. After all, it is our duty to start these conversations about the ethics of AI and where its faults lie.

Sign up for the CIRCLS newsletter to stay updated on emerging technologies for teaching and learning and let us know what you think by tweeting @EducatorCIRCLS

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Artificial Intelligence & Learning + Out of School Time Learning

by Merijke Coenraad

Tenth graders collaborate on an engineering project. Photo by Allison Shelley/The Verbatim Agency for EDUimages
Welcome back to our blog series on Ambitious Mashups! Today, we are going to focus on the use of Artificial Intelligence in Learning and mash it up with Learning in Out-of-school-time.

Artificial Intelligence and Learning. The most common technologies used within projects were (a) Intelligent tutoring systems, (b) machine learning, (c) speech, vision, and natural interactions, and (d) social robotics and avatars. While these were the most common technologies, many projects used a mashup of technologies and focused on how the technology could be used in new ways within the classroom to support learning. If you want to read more about the types of AI, we recommend the AI4K12 project and this poster thinking about 5 big ideas in AI as a starting point.

Need an example of what this looks like? Check out the Inq-Blotter project. It provides teachers with real-time tools that alert them to students’ science needs as they are learning. Students use the Inq-ITS platform that provides them inquiry science learning experiences through labs and simulations. In addition to providing the experiences, Inq-ITS is an intelligent tutoring system and is able to assess students in real-time. Inq-Blotter builds on these capabilities to send teachers messages relating to how students are doing so they are able to provide just-in-time support. Inq-Blotter provides teachers with the opportunity to gather formative assessment data and support quality inquiry learning. This ambitious project took multiple years of research and mashed up data science, assessment, science learning, and intelligent tutoring tools.

If you are interested in intelligent tutors that help you understand what your students know, you can also check out our post on ASSISTments, an intelligent tutor for math learning. You can also see our webinar that discusses both ASSISTments and Inq-ITS.

So, what does this mean for your classroom? Think about:

  • What technologies might lie ahead, and how do you want to use them? What would an intelligent tutoring system or social robot look like in your classroom?
  • Are you interested in using an intelligent tutor like Inq-ITS or ASSISTments in your classroom now? What are the implications of using these technologies for your teaching and practices within the classroom?
  • How could these emerging technologies affect your classroom practices and pedagogy? How will you continue to promote equitable learning opportunities when using them?

Out-of-school-time Learning. While we discussed technology in formal settings during the school day, some of the projects also investigated learning with educational technology in out-of-school environments.

Mash-it! Let’s look at a project that ambitiously mashes using AI in out-of-school learning!

The Virtual STEM Buddies for Personalized Learning Experiences in Free Choice Informal Learning Settings project brings together museum learning with intelligent agent buddies to support students’ STEM learning at a children’s museum. The computerized character interacts with children as they move through the museum and acts as both a mentor and a peer. The AI agent, aka the buddy, can give instructions based on teachable moments and help children find exhibits that aren’t crowded. AI in this out-of-school setting can provide youth with plenty of opportunities to learn and make the most of their museum experience. This ambitious project brought together team members from multiple universities and the Children’s Museum of Atlanta to mash up intelligent tutors, STEM, and informal learning.

What do you think of the possibilities with AI? Tweet us @EducatorCIRCLS and tell us about your innovative technology use and stay tuned for future blogs in this series about CIRCL Ambitious Mashups.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Learning Environments Webinar Resources

Educators, Artificial Intelligence, and the Future of Learning

Watch the recording.

Meet the practitioners:

Sarah Hampton
Middle and Secondary Math Teacher
Twitter: @S_W_Hampton

Sarah’s Statement on AI:
As a leader in Educator CIRCLS, and the wife of a programmer, it’s safe to say I’m somewhat of a technophile. I’m typically and happily an early adopter of ed tech. I’ve witnessed the positive difference the right technology under the right learning conditions can make, creating learning experiences that couldn’t be possible without it. Even so, my initial reaction to artificial intelligence (AI) in education was somewhere between skeptical and antagonistic. Like many teachers I’ve talked with, I’m concerned that using AI might weaken the human connection that’s so important for a healthy school environment. I’m concerned about equity and access issues around technology. I also have serious questions about my students’ privacy. While I remain concerned about these issues, as I started digging into what AI actually is (and isn’t), I’m also learning about the powerful new learning experiences that AI can create that aren’t possible without it. As AI continues to gain funding and attention, we teachers need to keep learning about it so we can offer our voices to the communities developing the tools they want us to use.

One particular paper has influenced me to keep participating in the conversation: Intelligence Unleashed: An argument for AI in education. Consider this:

teachers – alongside learners and parents – should be central to the design of AIEd tools, and the ways in which they are used. This participatory design methodology will ensure that the messiness of real classrooms is taken into account and that the tools deliver the support that educators need – not the support that technologists or designers think they need. Teachers who take part in these processes will gain increased technological literacy, new design skills, and a greater understanding of what AIEd systems can offer.

Luckin and colleagues also talk about the many other expertises involved in educational AIs such as anthropology, biology, computer science, linguistics, philosophy, psychology, neuroscience, sociology, and education. To her list, I would explicitly add the branches of social justice and ethics. For AI in education to be done well, we have to proceed with expert voices from each of these fields. It’s time for a serious risk/benefit conversation about the intended and unintended consequences. I can’t speak to the computer science or psychology aspects of an AI tool, but I have almost 15 years of practical experience with what does and doesn’t work in the classroom that I would like to contribute. For example, I’ve seen educational technology designers with good intentions offload too many pedagogical decisions to algorithms. When you haven’t been trained or had experience teaching, you may not fully understand which of those decisions are consequential for students. On the other hand, I’ve seen technologies developing in collaboration with teachers that improve the learning experience for students and teachers alike. Because specific AIs (like the ones currently being developed for classrooms) can be so powerful, the outcomes will certainly be high impact. It’s crucial that those impacts are positive for our students.

Read more of my thoughts on AI in education:

Diane W. Doersch
Director of Technology, Verizon Innovative Learning Schools at Digital Promise
Twitter: @DoerDi

Diane W. Doersch is the Director of Technology, Verizon Innovative Learning Schools at Digital Promise. Prior to serving at Digital Promise, she was a middle school classroom teacher, Director of Technology, then a Chief Technology and Information Officer for a large Wisconsin School district. With over 34 years in the field of education, Diane holds a BS in Elementary Education and a Masters Degree in Educational Technology. She serves as a CoSN (Consortium of School Networking) Board member and is CETL (Certified Educational Technology Leader) certified. She joined Digital Promise and the Verizon Innovative Learning Schools program to assist in closing the Digital Divide. All opinions expressed are her own and do not represent the stances of the organizations she serves.

Diane’s Statement on AI:
The addition of artificial intelligence (AI) into today’s society may have made our lives easier. Tasks that had previously taken a lot of time and human power to do have become increasingly easier and “just happen” with the addition of behind-the-scenes intelligence of AI. While I appreciate what AI has brought to our personal lives, as an EdTech leader, I continue to be cautious about how we incorporate artificial intelligence into what we do in our classrooms. School district decision-makers from the central office to the classroom level have a responsibility to become AI literate so they can ask discerning questions. We need to ask tough questions and explore topics surrounding student data privacy and security, ethics, and machine learning bias when it comes to AI. Thirty years in the future, I do not want a person to be held back by data that was collected and added to their profile as a student today. We need to ensure that we understand the implications of what we are doing at a societal level, before we implement technologies that may affect the futures of our students. I think the largest question we need to ask is, “If AI is the solution, what is the problem we are trying to solve?”

Favorite EdTech Organizations and their resources on AI:
CoSN (Consortium of School Networking) – Guidance on AI in the Classroom
Future of Privacy Forum – AI & Machine Learning

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Coded Bias Watch Party: May 3, 2021


“Our faces may well be the final frontier of privacy, but regulations make a difference. Congress must act now to uphold American freedoms and rights.” – Joy Buolamwini

Join us in celebrating Joy Buolamwini and the other genius women shining a light on the perils of AI in Coded Bias.
WHEN: May 3rd, 2021 6pm ET/ 3pm PT
WHERE: Teleparty (formerly known as Netflix Party)
Please note: You need your own Netflix account to join a Teleparty. Please reach out to us if this presents a difficulty for you.

Coded Bias highlights the incredible work currently being done by organizations, data scientists, and activists on an international scale. Please take the time to learn more about the organizations below and opportunities to join their current campaigns and volunteer your time to a cause worth fighting for. This film’s goal is to challenge our unconscious biases and listen to one another as the first step toward radical change. Learn more by reading this discussion guide.

We’re excited that you’ll be participating in our Coded Bias Watch Party! Here are some instructions on how to install Teleparty on to your computer and get streaming. Teleparty only works on Google Chrome, so if you need to download Google Chrome please do that first.

In Google Chrome, proceed with the following steps:
1. Visit https://www.netflixparty.com/ and click the red button at the top right that says “INSTALL TELEPARTY”.
2. Once you are redirected to the Chrome Web Store, click “Add to Chrome” to finish installing Teleparty.
3. After installing Teleparty, you should see the “TP” button next to your address bar. If you do not see the “TP” button: Click on the puzzle icon located next to the address bar and pin the “TP” button to your Chrome toolbar
4. Please check back here or the Educator CIRCLS twitter account on May 3rd, 2021 after 2:30pm PT for the party URL
5. To join a party, click on the party URL: https://www.tele.pe/netflix/04e0d04e56a966ab?s=s143
6. For additional help and concerns, feel free to visit https://www.netflixparty.com/support.

If this date and time does not work for you, consider hosting your own watch party and let us know about it by tagging Educator CIRCLS and using the #AIandEdu

Teacher Support Webinar Resources

Educators, Artificial Intelligence, and the Future of Learning

Watch the recording.

Learn more about TeachFX:

Visions for how AI can support teachers and classrooms:

Meet the Practitioner Panelists:

Kip Glazer, Ed.D.
Principal, San Marcos High School, Santa Barbara, CA
Twitter: @kipglazer

Kip Glazer, Ed.D. is a High School Principal in Santa Barbara, CA. She had been a classroom teacher and technology coach prior to becoming a school administrator. She is a native of South Korea and moved to the United States as an adult. Her experience of being an immigrant who learned to speak English as an adult has shaped her personal and professional identity.
She has a doctorate in Learning Technologies from Pepperdine University. She is interested in all things technology and how they impact learning.

Kip’s Statement on AI:
As a literature lover, I am interested in the good, the bad, and the ugly side of technologies that have been extensively explored in science fiction. As much as I see the potential for incredible benefits of AI in the classrooms to support the teachers and students alike, I am equally concerned about its misuse by people whose intentions may not align with the mission of public education. I firmly believe that teachers and administrators must be provided with the right type of training to be able to support all our students, and we all have an important role to play when it comes to creating the future with AI.


Pati Ruiz, Ed.D.

Researcher, Digital Promise
Twitter: @pati_ru
Pati Ruiz is a researcher with the Learning Sciences Research team. Prior to joining Digital Promise, Pati spent 16 years as a high school computer science teacher, Spanish teacher, and K – 12 administrator. Pati has also taught public school leaders and pre-service teachers in both the U.S. and Mexico and helped design online learning experiences for computing and information technology faculty focused on diversifying undergraduate computing programs.

Pati’s Statement on AI:
As a former Spanish and computer science teacher, I think a lot about emerging technologies and how they apply in learning contexts. Recently, I have been focused on AI and how it affects students, their families, and communities. I am particularly interested in the consequences (intended and unanticipated) of these emerging AI technologies on historically excluded students, specifically Latinx, Black, Indigenous, and students with disabilities. Working with the Center for Integrative Research on Computing and Learning Sciences (CIRCLS), I have seen work ranging from Intelligent Tutors and systems designed to adapt and personalize learning, including some that are developing pedagogical agents and robots. I’ve also seen work that seeks to minimize bias and promote equity in AI, projects using computer vision, natural language processing and speech technologies. Of all of these efforts, I consider the work to promote equity and accountability in AI to be the most important and I created this list to focus on those issues.

Kelly Thomas
Special Education Teacher, Newport News School District (VA)

Kelly Thomas is currently a Special Education teacher in Newport News, VA. Previously, she worked as a Customer Operations Manager for Sentara Health Plan for ten years. Kelly also owned and operated her own daycare home for seven years which sparked her love of teaching. She started in elementary education as an instructional assistant before obtaining her Bachelor’s degree and teacher certification. She has been in education for more than twelve years in one capacity or another.

Kelly’s Statement on AI:
Before TeachFX, my direct instruction approach could be described as traditional in its ratio of teacher talk to student talk. I taught lesson content and my students responded when asked to do so. Students were and still are encouraged to demonstrate lesson mastery through an “I do, We do, You do” gradual release method. The goal was and is for students to move skillfully from dependence to interdependence to independence. Now, thanks to TeachFX, my students feel more empowered to create and drive their learning experience. TeachFX has helped me become a skilled practitioner and facilitator toward that end. I am now aware of the ratio between my teacher talk time and my students’ talk time. As a special education teacher, I am very aware of the importance of wait time; however, I hadn’t considered that wait time occurs both after I speak and after my students speak. I am becoming more comfortable with periods of silence, which used to be very uncomfortable for me. I find now that my students are more engaged with each other as well as with me! I love teaching, and my students grow and thrive daily as they demonstrate their love of learning!