Category Archives: AI in Education

ASSISTments: A Forward-thinking Formative Assessment AI Ready to Use in Your Classroom Today

by Sarah Hampton

In my last post, I discussed the ways AI can enhance formative assessment. In this post, let’s take a look at the AI example I’m most excited about and how it’s already benefited 11,000 teachers!

ASSISTments seems both practical and forward-thinking, a unique combination. Sometimes it can be frustrating getting excited about new technologies when they’re still in development and not yet ready for the classroom. But, unlike many cutting-edge projects I read about, ASSISTments is ready to implement now.

In their own words, “ASSISTments is more than an assessment tool. It improves the learning journey, translating student assessment to skill building and mastery, guiding students with immediate feedback, and giving teachers the time and data insights to drive every step of their lesson in a meaningful way.”

ASSISTments works through a four-step process to help you get started:

  1. Create assignments.
    Teachers select questions from existing or custom question banks. I was really impressed with the number and variety of sets already on the site. There are question sets from select open educational resources, textbook curricula, and released state tests ready to be assigned. There are also pre-made general skill-building and problem-solving sets. Note, everything the students see is assigned by you, the teacher.
  2. Assist students through immediate feedback.
    As students complete their assigned problems, they might receive hints and explanations to help them understand. Check out these screenshots of the platform. (See more in The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching)
    Example buggy message: “No. You might be thinking that the area is one half base times height, but you are looking for the perimeter.”

    Figure: An ASSISTments message shown just before the student hits the “done” button, showing two different hints and one buggy message that can occur at different points.

    Students immediately know if they’re right or wrong and can answer multiple times for partial credit, and, at the end of each assignment, each student receives an outcome report detailing their performance.

  3. Assess class performance.
    Data is also available to the teacher. Check out how easy they make it for teachers to gauge student progress. Symbols of success on the assignment report (a green check mark, a green X, a red X, and a highlighted red X) let you quickly assess student and class performance.

    Figure: An easy way to visualize student performance.

    Figure: A popular ASSISTments report organizes student homework results in a grid, with tasks in columns and students in rows, and enables teachers to quickly identify which problems to review and what the common errors were, as indicated by the annotations.

  4. Analyze answers together (with your students).
    After teachers see which problems were routinely missed, class time can be spent on the most needed concepts. As the ASSISTments site says, “Homework and classwork flow seamlessly into instruction of new material.” You can use the information you gain from the reports to determine what you will cover the next day. If everyone gets a concept you can move on and not waste valuable class time covering material that is understood. ASSISTments can also help support groups or personalized work.

This four-step process models what needs to happen in effective formative assessment, which was discussed in the second post of this series. Students engage in an assessment for learning (in this case it’s their homework), receive specific, supportive, timely, and focused feedback on how to close the gap between their current and desired understanding, and the results of the assessment are used to drive the next learning encounter.

Based on the undergirding principles of formative assessment, it’s no surprise that ASSISTments meets the rigorous What Works Clearinghouse standards without reservation, and receives a strong rating as an evidence-based PK-12 program by Evidence for ESSA. Based on a randomized controlled trial examining 2,728 seventh grade students in Maine, on average, the use of ASSISTments “produced a positive impact on students’ mathematics achievement at the end of a school year” equivalent to a student at the 50th percentile without the intervention improving to the 58th percentile with it. In addition, as seen in other formative assessment studies, the largest gains were seen by students with low prior achievement. (Online Mathematics Homework Increases Student Achievement) ASSISTments helps you by helping the students who need it the most and seems to allow you to be in multiple places at once!
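
For readers curious about the arithmetic behind that percentile framing, the reported gain corresponds to a standardized effect size of roughly 0.2 standard deviations. This back-of-the-envelope conversion is mine, not a figure quoted in the study:

    d \approx \Phi^{-1}(0.58) - \Phi^{-1}(0.50) \approx 0.20

where Φ is the standard normal cumulative distribution function; a student at the control-group median (50th percentile) who gains about 0.2 standard deviations lands near the 58th percentile.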

One of the reasons I’m so excited about this program is that it was thoughtfully designed with teachers and students in mind. Neil and Cristina Heffernan, the co-creators of ASSISTments, write this in The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.

“In many ways the list of problem sets in ASSISTments is a replacement for the assignment from the textbook or the traditional worksheet with questions on it. This way the teachers do not have to make a drastic change to their curriculum in order to start using the system. But more importantly they can make more sense of the data they get back since they are the ones who selected and assigned the problems. This is in contrast to the idea of an artificial intelligence automatically deciding what problem is best for each student. While this is a neat idea, it takes the teacher out of the loop and makes the computer tutorial less relevant to what is going on in the classroom.”

Exactly! I want formative assessment–in and out of the classroom–to meaningfully guide my instruction. Furthermore, I really appreciate that ASSISTments was designed to give teachers assistance in the workflow, to inform them about what students are learning, and, more importantly, not learning, so that teachers can make an informed decision on how to best help their students. I hope including teachers in the design process and helping teachers work more effectively with their students becomes a standard for educational AIs.

You need a school-verified Google Classroom or paid Canvas account to use it, but ASSISTments itself is free! Unfortunately, our school uses a basic Canvas account, but customer service at ASSISTments allowed me to have a teacher role using a personal account so I could fully explore the program. I’m hopeful that this can be a transformative homework solution for math students! I think it will be worth your time to see what ASSISTments can offer you.

Note, I am not affiliated with ASSISTments and was not paid or asked to write about ASSISTments. I learned about it from CIRCL, and I was intrigued because I teach mathematics, but everything I discovered about it was through my research and my excitement about its potential is my own.

Watch this short video to learn more about ASSISTments, and read more about co-creator Neil Heffernan in his CIRCL Perspective.

Thank you to ASSISTments’ co-creator Cristina Heffernan and to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Ambitious Mashups and CIRCLS

By CIRCL Educators

CIRCL, the Center for Innovative Research in Cyberlearning, has come to an end, but don’t worry, we’re getting ready to roll over to a new project called CIRCLS, the Center for Integrative Research in Computing and Learning Sciences. Stay tuned here and we’ll keep you apprised of any changes. Of course we’ll still be working to bridge practice and research and share what CIRCLS is doing and what we, as educators, are thinking about and facing in our work. If you’d like to get more involved with our work, please contact us! We’re looking for more educators to think and write with.

In the meantime, before we transition to CIRCLS, we want to dive into the final report from CIRCL. In it, we reflect on what we’ve learned since 2013 when CIRCL started. The world and technology have both changed quite a bit. Over the years, CIRCL worked with the approximately 450 projects funded by the National Science Foundation through their Cyberlearning program. Cyberlearning is a hard term to grasp, but the program and the projects in it were about using what we know about how people learn and creating new design possibilities for learning with emerging technology. In addition, in a 2017 report, we noted a strong commitment to equity in the CIRCL community. That commitment continues and is discussed in our final report with recommendations for future work to strengthen this important theme.

One thing we were struck by in the review of the projects was that there were many innovative designs to enhance learning with technology. As we tried to categorize the projects, we noticed that most contained combinations of multiple technologies, learning theories, and methods. While this may sound confusing, these combinations were purposefully designed to help augment learning and deepen our understanding of the technologies and how people learn. We looked for a term to use to explain this phenomenon and couldn’t find one, so we came up with a new one: Ambitious Mashups. In addition to the importance of mashing things up, the report also discusses several other themes.

Next week, we’ll be part of a webinar and talk through the different sections of the report. The webinar welcomes practitioners who want to learn more about research on emerging technologies from NSF-funded projects. While the projects aren’t always ready for use in a school today, they offer ideas for new projects and new ways to think about how to use technology to support learning. The ambitious mashup projects think about learning in different ways and show how grounding activities in what we know about how people learn can help meet learning goals and outcomes. Ambitious mashups are exciting and spark new ideas. CIRCL Educator Sarah Hampton says CIRCL reports can “help you get excited about the future landscape of education.”

We invite you to join us for the webinar, Ambitious Mashups and Reflections on a Decade of Cyberlearning Research.
Date: 10/28/2020
Time: 4 pm Eastern / 3 pm Central / 1 pm Pacific

Register

Book Review: You Look Like a Thing and I Love You

By Judi Fusco

During CIRCL Educators’ Summer of Artificial Intelligence (AI), I read the book You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place1, by Dr. Janelle Shane. I got the recommendation for it from fellow CIRCL Educator, Angie Kalthoff.

I found the book helpful even though it is not about AI in education. I read and enjoyed the e-book and the audio version. As I started writing this review, I was driving somewhere with one of my teenagers and I asked if we could listen to the book. She rolled her eyes but was soon laughing out loud as we listened. I think that’s a great testament to how accessible the book is.

Teaching an AI

Many of us use AI products, like Siri or Alexa, on a regular basis. But how did they get “smart”? In the book, Dr. Shane writes about the process of training machine learning2 systems to be “intelligent”. She tells us how they certainly don’t start smart. Reading about the foibles, flailings, and failings that she has witnessed in her work helped me understand why it is so important to get the training part right and helped me understand some of what needs to be considered as new products are developed.

Dr. Shane starts out comparing machine learning and rule-based AI, which are two very different approaches to building AI systems. Briefly, a rule-based system uses rules written by human programmers as it works with data to make decisions. By contrast, a machine learning algorithm3 is not given rules. Instead, humans pick an algorithm, give it a goal (maybe to make a prediction or decision), give it example data that helps the algorithm learn4, and then the algorithm has to figure out how to achieve that goal. Depending on the algorithm, it will discover its own rules (for some, this means adjusting weights on the connections between what is input and what is output). From the example data given to the algorithm, it “learns,” or rather improves what it produces, through its experience with that data. It’s important to note that the algorithm is doing the work to improve and not a human programmer. In the book, Dr. Shane explains that after she sets up the algorithm with a goal and gives it training data, she goes to get coffee and lets it work.
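
To make the contrast concrete, here is a minimal sketch. It is not from the book; the tiny dataset, the threshold in the hand-written rule, and the use of scikit-learn are all my own illustrative choices:

    # Minimal sketch contrasting a rule-based check with a machine learning model.
    # Toy task: decide whether a student's answer time (seconds) and hint count
    # suggest they need a review problem. All numbers here are made up.

    from sklearn.tree import DecisionTreeClassifier

    # Rule-based AI: a human wrote the rule explicitly.
    def needs_review_rule(seconds, hints):
        return seconds > 90 or hints >= 2

    # Machine learning: we supply labeled examples and the algorithm
    # works out its own rule (here, a small decision tree).
    examples = [[30, 0], [45, 1], [120, 0], [60, 3], [20, 0], [150, 2]]
    labels   = [0,       0,       1,        1,       0,       1]   # 1 = needed review

    model = DecisionTreeClassifier(max_depth=2).fit(examples, labels)

    print(needs_review_rule(100, 0))      # decision from the human-written rule
    print(model.predict([[100, 0]])[0])   # decision from the rule the model discovered

In the first case a person decided where the cutoff is; in the second, the algorithm derived its own cutoff from the examples, which is exactly the work Dr. Shane describes leaving it to do while she gets coffee.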

Strengths and Weaknesses

There are strengths and weaknesses in the machine learning approach. A strength is that, as the algorithm tries to reach its goal, it can detect relationships and features in the data that the programmer may not have thought were important, or may not even have been aware of. This can be either good or bad.

On the positive side, an AI sometimes tries a novel solution because it isn’t bogged down by knowledge of the rules and constraints of the world. However, not knowing about those constraints can also be bad and lead to impossible ideas. For example, in the book, Dr. Shane discusses how, in simulated worlds, an AI will try things that won’t work in our world because it doesn’t understand the laws of physics. To help the AI, a human programmer needs to specify what is and isn’t possible. An AI will also take shortcuts that may lead to the goal but may not be fair. In one instance, while playing a game, an AI system discovered there wasn’t enough RAM in its opponent’s computer for a specific move. The AI would make that move, cause the other computer to run out of RAM and crash, and then win every time. Dr. Shane discusses many other instances where an AI exploits a weakness to look like it’s smart.

Another problem we have learned about from machine learning work is that it highlights and exacerbates the problems present in its training data. For example, much training data comes from the internet, and much of the data on the internet is full of bias. When biased data are used to train an AI, the biases and problems in the data become what guide the AI toward its goal. Because of this, our biases, found on the internet, become perpetuated in the decisions the machine learning algorithms make. (Read about some of the unfair and biased decisions that have occurred when AI was used to make decisions about defendants in the justice system.)

Bias

People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)

In the book, Dr. Shane says, machine learning occurs in the AI algorithms by “copying humans” — the algorithms don’t find the “best solution” or an unbiased one, they are seeking a way to do “what the humans would have done” (p 24) in the past because of the data they use for training. What do you think would happen if an AI were screening job candidates based on how companies typically hired in the past? (Spoiler alert: hiring practices do not become less discriminatory and the algorithms perpetuate and extend biased hiring.)
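
A toy sketch makes the mechanism visible. Nothing here comes from the book: the data are invented, and logistic regression is just a stand-in for whatever model a company might actually use. The point is that the model copies the pattern in past decisions, not any notion of fairness:

    # Toy illustration: a model trained on past hiring decisions copies the bias in them.
    # Each candidate is [years_experience, group], where "group" is a protected
    # attribute (0 or 1). In this invented history, group 1 was never hired.

    from sklearn.linear_model import LogisticRegression

    past_candidates = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
    past_hired      = [1,      1,      1,      0,      0,      0]

    model = LogisticRegression().fit(past_candidates, past_hired)

    # Two equally experienced new candidates, differing only in group membership:
    print(model.predict([[5, 0], [5, 1]]))  # the learned "rule" favors group 0

The model has no concept of discrimination; it simply found that group membership predicted past hiring and built that into its rule, doing “what the humans would have done.”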

A related problem comes about because machine learning AIs make their own rules. These rules are not explicitly stated in some machine learning algorithms so we (humans aka the creators and the users) don’t always know what an AI is doing. There are calls for machine learning to write out the rules it creates so that humans can understand them, but this is a very hard problem and it won’t be easy to fix. (In addition, some algorithms are proprietary and companies won’t let us know what is happening.)

Integrating AIs into our lives

It feels necessary to know how a machine is making decisions when it is tasked with making decisions about people’s lives (e.g., prison release, hiring, and job performance). We should not blindly trust how AIs make decisions. AIs have no idea of the consequences of their decisions. We can still use them to help us with our work, but we should be very cautious about the types of problems we automate. We also need to ensure that an AI makes clear what it is doing, so that humans can review the automation, override its decisions, and weigh the consequences of an incorrect decision. Dr. Shane reminds us that an “AI can’t be bribed but it also can’t raise moral objections to anything it’s asked to do” (p. 4).

In addition, we need to ensure that the data we use for training are as representative as possible to avoid bias, that the system can’t take shortcuts to meet its goal, and that the system works well for many different populations (e.g., across gender, race, and learning differences). Also, an AI is not as smart as a human; in fact, Dr. Shane shares that most AI systems using machine learning (in 2019) have the approximate brainpower of a worm. Machine learning can help us automate tasks, but we still have a lot of work to do to ensure that AIs don’t harm or damage people.

What are your thoughts or questions on machine learning or other types of AI in education? Tweet to @CIRCLEducators and be part of the conversation.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

See a recent TED Talk by author Janelle Shane.


Notes:

  1. Read the book to find out what the title means!
  2. Machine learning is one of several AI approaches.
  3. Machine Learning is a general term that also includes neural networks and the more specialized neural network class of Deep Learning. Note also, a famous class of ML algorithms that use rules are decision-tree algorithms.
  4. Some algorithms “learn” with labeled examples and some without, but that’s a discussion beyond the scope of this post.

Algorithms, Educational Data, and EdTech: Anticipating Consequences for Students

By Pati Ruiz and Amar Abbott

The 2020-2021 school year is underway in the U.S. and for many students, that means using edtech tools in a fully online or blended learning environment. As educators, it is our responsibility to consider how students are using edtech tools and what the unanticipated consequences of using these tools might be. Before introducing edtech tools to students, administrators should spend time considering a range of tools to meet the needs of their students and teachers. In a recent blog post, Mary Beth Hertz described the opportunities for anti-racist work in the consideration and selection of the tools students use for learning. Hertz identified a series of questions educators can ask about the tools they will adopt to make sure those tools are serving the best interest of all of their students. Two of the questions in Hertz’s list ask us to consider data and algorithms. In this post, we focus on these two questions and Hertz’s call to “pause and reflect and raise our expectations for the edtech companies with which we work while also thinking critically about how we leverage technology in the classroom as it relates to our students of color.” The two questions are:

  1. How does the company handle student data? and,
  2. Has the company tested its algorithms or other automated processes for racial biases?

To help us better understand the issues around these two questions, we will discuss the work of two researchers: Dr. Safiya Noble and Dr. Ruha Benjamin. This post expands on our previous post about Dr. Noble’s keynote address — The Problems and Perils of Harnessing Big Data for Equity & Justice — and her book, Algorithms of Oppression: How Search Engines Reinforce Racism. Here, we also introduce the work of Dr. Ruha Benjamin, and specifically the ideas described in her recent book Race After Technology: Abolitionist Tools for the New Jim Code.

Student Data

In order to understand how companies handle student data, we need to first consider the concept of data. Data are characteristics or information that are collected in a manner capable of being communicated or manipulated by some process (Wiktionary, 2020). In Dr. Noble’s keynote speech, she discusses the social construction of data and the importance of paying attention to the assumptions that are made about the characterization of data that are being collected. In her book, Dr. Noble shows how Google’s search engine perpetuates harmful stereotypes about Black women and girls in particular. Dr. Benjamin describes the data justice issues we are dealing with today as ones that come from a long history of systemic injustice in which those in power have used data to disenfranchise Black people. In her chapter titled Retooling Solidarity, Reimagining Justice, Dr. Benjamin (2019) encourages us to “question, which humans are prioritized in the process” (p. 174) of design and data collection. With edtech tools, the humans who are prioritized in the process are teachers and administrators; they are the “clients.” We need to consider and prioritize the affected population: students.

When it comes to the collection and use of educational data and interventions for education, there is much work to be done to counteract coded inequities of the “techno status quo.” In her keynote, Dr. Noble offered a list of suggestions for interventions including:

  1. Resist making issues of justice and ethics an afterthought or additive
  2. Protect vulnerable people (students) from surveillance and data profiling

Center Issues of Justice and Ethics

As described by Tawana Petty in the recent Wired article Defending Black Lives Means Banning Facial Recognition, Black communities want to be seen and not watched. The author writes:

“Simply increasing lighting in public spaces has been proven to increase safety for a much lower cost, without racial bias, and without jeopardizing the liberties of residents.”

What is the equivalent of increasing light in education spaces? What steps are being taken to protect students from surveillance and data profiling? How are teachers and students trained on the digital tools they are being asked to use? How are companies asked to be responsible about the kinds of data they collect?

Schools have legal mandates meant to protect students’ rights, such as the Family Educational Rights and Privacy Act (FERPA) in the U.S. and other policies that protect student confidentiality regarding medical and student educational records. Although a lot of forethought has gone into protecting students’ confidentiality, has the same critical foresight been applied when purchasing hardware and software?

In Dr. Noble’s keynote speech, she described the tracking of students on some university campuses through the digital devices they connect to campus Internet or services (like a Library or Learning Management System). The reasoning behind tracking students is to allocate university resources effectively to help the student be successful. However, in this article, Drew Harwell writes about the complex ethical issues regarding students being digitally tracked and an institution’s obligation to keep students’ data private. So, before software or hardware is used or purchased, privacy and ethics issues must be discussed and addressed. Special energy needs to be dedicated to uncovering any potential “unanticipated” consequences of the technologies as well. After all, without the proper vetting, a bad decision could harm students.

Protect Vulnerable Students

Protecting vulnerable students includes being able to answer Hertz’s question: “Has the company tested its algorithms or other automated processes for racial biases?” But, even when the company has tested its algorithms and automated processes, there is often still work to be done because “unanticipated” results continue to happen. Twitter spokesperson Liz Kelley recently posted a tweet saying: “thanks to everyone who raised this. we tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do.”

She was responding to the experiment shown below where user @bascule posted: “Trying a horrible experiment…Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?”

Twitter’s machine learning algorithm chose to center the white face instead of the Black face when presented with an image in which the white profile picture appeared on top, white space in between, and the Black profile picture below. It did the same when the Black profile picture appeared on top, with white space in between, followed by the white profile picture.

Figure: The Twitter experiment; the algorithm selects the white face regardless of placement.

As we can see, the selection and use of tools for learning is complicated and requires balancing many factors. As CIRCL Educators we hope to provide some guidance to ensure the safety of students, families, and their teachers. Additionally, we are working to demystify data, algorithms, and AI for educators and their students. This work is similar to the work being done by public interest technologists in the communities and organizations described by both Noble and Benjamin. We don’t have all of the answers, but these topics are ones that we will continue to discuss and write about. Please share your thoughts with us by tweeting @CIRCLEducators.

References

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press.

data. (2020, August 12). Wiktionary, The Free Dictionary. Retrieved 15:31, August 26, 2020 from https://en.wiktionary.org/w/index.php?title=data&oldid=60057733.

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.


AI and Formative Assessment

by Sarah Hampton

In my last post, I talked about effective formative assessments and their powerful impact on student learning. In this post, let’s explore why AI is well-suited for formative assessment.

  1. AI can offer individualized feedback on specific content.
  2. AI can offer individualized feedback that helps students learn how to learn.
  3. AI can provide meaningful formative assessment outside of school.
  4. AI might be able to assess complex and messy knowledge domains.

Individualized Feedback on Content Learning

I think individualized feedback is the most powerful advantage of AI for assessment. As a teacher, I can only be in one place at a time looking in one direction at a time. That means I have two choices for feedback: I can take some time to assess how each student is doing and then address general learning barriers as a class, or I can assess and give feedback to students one at a time. In contrast, AI allows for simultaneous individualized feedback for each student.

“AI applications can identify pedagogical materials and approaches adapted to the level of individual students, and make predictions, recommendations and decisions about the next steps of the learning process based on data from individual students. AI systems assist learners to master the subject at their own pace and provide teachers with suggestions on how to help them.” (Trustworthy artificial intelligence (AI) in education: promises and challenges)

Going one step further, AI has the ability to assess students without disrupting their learning through something called stealth assessment. While students work, AI can quietly collect data in the background, such as the time it takes to answer questions and which incorrect strategies they tried before succeeding, and organize it into a dashboard so teachers can use that data to inform what to focus on or clear up the next day in class. Note: As a teacher, I want the AI to help me do what I do best. I definitely want to see what each student needs in their learning. Also, as a teacher, I want to be able to control when the AI should alert me about intervening (as a caring human) instead of it trying to do something on its own that it isn’t capable of doing well.
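
As a rough sketch of what that background data collection might look like, here is a small example. The event fields and thresholds are hypothetical and not taken from any particular product; the point is only that quiet, per-problem logging can be rolled up into something a teacher can scan before the next class:

    # Hypothetical sketch of stealth-assessment logging: record simple events while
    # students work, then summarize them into a per-student view for the teacher.

    from collections import defaultdict
    from statistics import mean

    # Each event: (student, problem, seconds_to_answer, attempts_before_correct)
    events = [
        ("Ana", "p1", 42, 1),
        ("Ana", "p2", 95, 3),
        ("Ben", "p1", 30, 1),
        ("Ben", "p2", 50, 1),
    ]

    by_student = defaultdict(list)
    for student, problem, seconds, attempts in events:
        by_student[student].append((seconds, attempts))

    for student, rows in by_student.items():
        avg_time = mean(seconds for seconds, _ in rows)
        struggled = [row for row in rows if row[1] > 1]
        print(f"{student}: avg {avg_time:.0f}s per problem, "
              f"{len(struggled)} problem(s) took multiple attempts")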

Feedback That Helps Students Learn How to Learn

“Two experimental research studies have shown that students who understand the learning objectives and assessment criteria and have opportunities to reflect on their work show greater improvement than those who do not (Fontana & Fernandes, 1994; Frederikson & White, 1997).” (The Concept of Formative Assessment)

In the last post, I noted that including students in the process of self-assessment is critical to effective formative assessment. After all, we ultimately want students to be able to self-regulate their own learning. But, as one teacher, it can sometimes be difficult to remind students individually to stop and reflect on their work and brainstorm ways to close the gap between their current understanding and their learning goal. By contrast, regulation prompts can be built into AI software so students routinely stop and check for understanding and defend their reasoning, giving students a start on learning how to self-regulate.

For example, this is done in Crystal Island, an AI game-based platform for learning middle school microbiology, “students were periodically prompted to reflect on what they had learned thus far and what they planned to do moving forward…Students received several prompts for reflection during the game. After completing the game or running out of time, students were asked to reflect on their problem-solving experience as a whole, explaining how they approached the problem and whether they would do anything differently if they were asked to solve a similar problem in the future.” (Automated Analysis of Middle School Students’ Written Reflections During Game-Based Learning)

Figure: In-game reflection prompt presented to students in Crystal Island.

Meaningful Formative Assessment Outside of School

Formative assessment and feedback can come from many sources, but, traditionally, the main source is the teacher. Students only have access to their teacher inside the classroom and during class time. In contrast, AI software can provide meaningful formative assessment anytime and anywhere which means learning can occur anytime and anywhere, too.

In the next post, we’ll look at how one AI tool, ASSISTments, is using formative assessment to transform math homework by giving meaningful individualized feedback at home.

Assessing Complexity and Messiness

In the first post of the series, I discussed the need for assessments that can measure the beautiful complexity of what my students know. I particularly like the way Griffin, McGaw, and Care state it in Assessment and Teaching of 21st Century Skills:

“Traditional assessment methods typically fail to measure the high-level skills, knowledge, attitudes, and characteristics of self-directed and collaborative learning that are increasingly important for our global economy and fast-changing world. These skills are difficult to characterize and measure but critically important, more than ever. Traditional assessments are typically delivered via paper and pencil and are designed to be administered quickly and scored easily. In this way, they are tuned around what is easy to measure, rather than what is important to measure.”

We have to have assessments that can measure what is important and not just what is easy. AI has the potential to help with that.

For example, I can learn more about how much my students truly understand about a topic from reading a written response than from a multiple choice response. However, it’s not possible to frequently assess students this way because of the time it takes to read and give feedback on each essay. (Consider some secondary teachers who see 150+ students a day!)

Fortunately, one major area for AI advancement has been in natural language processing. AIs designed to evaluate written and verbal ideas are quickly becoming more sophisticated and useful for providing helpful feedback to students. That means that my students could soon have access to a more thorough way to show what they know on a regular basis and receive more targeted feedback to better their understanding.
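
Real systems rely on far more sophisticated language models, but even a crude sketch shows the shape of the idea. The sentences below are invented, and TF-IDF similarity against a single reference answer is only a stand-in for what production NLP scoring actually does:

    # Crude sketch of automated scoring of a written response: compare a student's
    # answer to a reference explanation using TF-IDF cosine similarity.
    # Modern systems are far more sophisticated; this only illustrates the concept.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
    student_answers = [
        "Plants use light to make glucose, storing the energy chemically.",
        "Photosynthesis is when plants breathe at night.",
    ]

    vectorizer = TfidfVectorizer().fit([reference] + student_answers)
    reference_vector = vectorizer.transform([reference])

    for answer in student_answers:
        score = cosine_similarity(reference_vector, vectorizer.transform([answer]))[0, 0]
        print(f"{score:.2f}  {answer}")

A higher score here only means more word overlap with the reference, which is exactly why real systems need richer models of meaning, but it illustrates how a machine can attach a number to free-form writing instead of requiring a multiple choice format.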

While the purpose of this post is to communicate the possible benefits of AI in education, it’s important to note that my excitement about these possibilities is not a carte blanche endorsement for them. Like all tools, AI has the potential to be used in beneficial or nefarious ways. There is a lot to consider as we think about AI and we’re just starting the conversation.

As AI advances and widespread classroom implementation becomes increasingly possible, it’s time to seriously listen to those at the intersection of the learning sciences and artificial intelligence, like Rose Luckin. “Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policymakers to develop the ethical framework within which AI assessment can thrive and bring benefit.” (Towards artificial intelligence-based assessment systems)

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

We are still at the beginning of our conversation around AI in Education. What do you think? Do the possible benefits excite you? Do the possible risks concern you? Both? Let us know @CIRCLEducators.


Considering Techquity in the Classroom

By Merijke Coenraad

Merijke Coenraad is a PhD Candidate in the Department of Teaching & Learning, Policy & Leadership in the College of Education at the University of Maryland. She is a former middle school teacher. Her research focuses on the intersections of educational technology and equity including the creation of materials, platforms, and experiences in partnership with teachers and youth through participatory design methods.

Flashback to a Spanish Classroom (2016)

Chromebooks out. Hushed silence. Each student leans over their computer. Tension in the air. I yell, “GO!” and with one word, the room erupts and groups hurriedly work together to identify vocabulary words before their classmates. In loud whispers students ask their partners for words, “Calcetines, who has socks?” One mistake and the group will have to start over; the stakes are high, and no star student can single-handedly win the game for their peers.

Quizlet transformed flashcards, a time-consuming (and often lost or forgotten) physical learning tool, into a digital learning experience. My students practiced their vocabulary words through drills and games all week, and on Friday, we played Quizlet Live.

When I was still in the classroom, I loved to bring new technology into my social studies and Spanish lessons. I got excited discovering tools like EdPuzzle and Padlet when they were first breaking onto the education stage. With 1:1 Chromebooks in my middle school classroom, there was hardly a class period where students were not somehow connected to technology, and each of these technologies meant creating a new account. Looking back, I realize that I was naïve while teaching. As I brought tool after tool to my students, I didn’t think deeply about the data collection ramifications and the way that the very tools that could enhance learning might be treating my students inequitably and perpetuating the structural racism and human biases that I worked each day to dismantle. The educational technology that I brought into my classroom had positive effects, but it also had hidden consequences, most of which I might never know.

Four years after leaving the classroom to begin my PhD, my work focuses on one thing, Techquity, or the intersection of technology and equity. This focus is driven by the students I taught and the many times I saw technology act as both an access point and a barrier to their education. Even though I wasn’t thinking about data collection, algorithmic bias, and the effects of AI for the students in my classroom, I was still focused on how technology helped and hindered my students’ education. But those barriers and hindrances go beyond the devices and internet access I have long considered. In the last year, I have learned a lot about forces within and around technology that cause inequities. I have learned about the Coded Gaze of AI Technologies from Joy Buolamwini and the New Jim Code from Ruha Benjamin. I’ve learned about the biases inherent in the very design of technologies with Sara Wachter-Boettcher and how algorithms can be Weapons of Math Destruction from Cathy O’Neil. It has led me to focus on how I can not only be more cognizant of the biases of technology, but also teach students about them.

Techquity: Co-designing with Kids

To learn more about what kids think about Techquity concerns, I partnered with a youth design team to hear what they had to say about Techquity and learn which Techquity concerns were of the most interest to them. I find that kid insight is critical whenever I am discovering new topics to teach to students. The team was made up of 7 Black youth between the ages of 8 and 13 who meet twice a week to design technologies and learn about being a designer.

Let’s look a little bit at what the kids had to say about Techquity.

While they didn’t have the vocabulary to name algorithmic bias or biases in voice recognition technology, the kids quickly began offering examples of how technologies can be good and bad and how even single technologies can have good and bad sides. For example, one group identified Siri as helpful because “she” can give information without typing, but they also were worried that Siri doesn’t always understand them and “SIRI CAN LISTEN TO US!!!!” While the AI in their phones allowed the students to access all sorts of information, they were not immune to considerations of what it meant for a device to always be listening for, “Hey Siri…”

As our conversation turned and I introduced the kids to some common examples of Techquity concerns such as data collection, targeted advertising, misidentification by AI, and non-diverse tech design teams, the kids continued to describe their own examples. They could recollect times when they received targeted advertising based on location or a recent website visit.

Techquity Concerns

10 common Techquity concerns we discussed are:

  • Algorithms (computer programs) don’t treat everyone fairly
  • Technology development teams are frequently not diverse
  • Alexa, Google Home, and Siri are always listening to me
  • I get personalized ads based on data companies collect about me
  • Technology is not always accessible for individuals with disabilities
  • Companies sell my data
  • Sensors and systems like Alexa, Google Home, and Siri get confused about how I look or what I say
  • People don’t understand how technology works
  • Machine learning and facial recognition isn’t trained well enough to recognize everyone

The kids each ranked the 10 Techquity concerns from “very important to me” to “not very important to me.” The two most highly ranked ideas were algorithmic bias and non-diverse tech companies. The kids were especially concerned that individuals who looked like them were not being represented on design teams when they themselves were and what this meant for the technologies being designed.

As their final design task, the kids designed ways to teach other kids about Techquity by drawing their ideas out on an online platform mimicking paper and pencil. Interestingly, the kids didn’t want to move away from technology just because it could be biased, they just wanted it to be created in more equitable ways and to be used to teach others. Their teaching often included advanced algorithms and even AI. They designed scenarios using robots and adaptive software to allow other kids to experience obvious Techquity concerns and learn from their experiences. One girl, Persinna, explicitly discussed the three-member design team shown in her game as having 2 girls and 1 boy because “that is Techquity.” Kabede felt very strongly that data collection by tech companies was a big concern. He started making connections to actual tools he knows such as DuckDuckGo, a search engine that does not profile users and focuses on user privacy.

What I Would Consider Now If I Were Still a Teacher

I’d start from what these kids already know about Techquity and how algorithms and AI are affecting their lives and build on that. I would educate students about the biases inherent in Google searches, which sort not by popularity of links as is commonly assumed, but based on user profiles and advertising. I would use Kabede’s recommendation and have students use a search engine like DuckDuckGo to prevent tracking and allow for private searches. I would challenge students to think about where algorithms, AI, and technology design are already affecting their lives and how technologies might work better for some individuals than they do for others. We would talk about the sensors in automatic sinks, paper towel dispensers, and medical devices and how those sensors work based on light, but oftentimes work better for people with lighter skin. We would discuss Joy Buolamwini’s experiences and work and talk about how machine learning training sets are often not adequate to identify all people well and how this has direct consequences for the use of AI in policing and surveillance.

While the students in my classroom wouldn’t be the ones causing the technology bias, I would make sure they were aware of it and how it had direct implications for their lives. Most of all, I would base these discussions in students’ lived experiences. Just like the kids on the design teams, it is inevitable that my students experienced technology bias; they just might not have had words for it or known why it was happening. The more I could teach my students and bring Techquity concerns to their knowledge, the more they could protect themselves (and their communities) and make educated decisions about their lives with technology. I know that my middle school students wouldn’t give up their technology, and knowing about the biases held by the designers of that technology probably wouldn’t change their opinion that technology is, as Joshua said in the design session, “the best thing ever.” Still, knowing more about their digital footprint and how companies are using their information gives them a small advantage. In this case, knowledge of Techquity concerns could give them power over their data and their technology use.

Three Visionary Projects Using AI in Education

by Sarah Hampton

One of my favorite things about our Summer of AI is learning about cyberlearning projects and how they might benefit future students. In this post, I want to showcase three projects that caught my attention because they use AI in different ways and for different ages. When we began in June, I was thinking AI might be mostly about robots in STEM classes or general AIs like Siri or Alexa. But now, after learning about these three example projects and many more, I realize that the future might be more about specialized AIs giving teachers information and ways to personalize learning. Sometimes this is behind the scenes, like the first project I highlight. Sometimes, like the third project, a robot is used in Mandarin class (instead of in a technology class). Let us know what you think about these projects and their potential to change how you teach and learn @CIRCLEducators!

  1. Project:

Human/AI Co-Orchestration of Dynamically-Differentiated Collaborative Classrooms

Figure 1. Left: A teacher using Lumilo while her students work with Lynette, an ITS for equation solving, in class (from Holstein et al., 2018b); Right: A point-of-view screenshot through Lumilo.

“This project will create and demonstrate new technology that supports dynamically-differentiated instruction for the classroom of the future. This new vision centers on carefully-designed partnerships between teachers, students, and artificial intelligence (AI). AI-powered learning software will support students during problem-solving practice, providing either individual guidance (using standard intelligent tutoring technology) or guidance so students can effectively collaborate and tutor each other. These learning activities are constantly adjusted to fit each student’s needs, including switching between individual or collaborative learning. The teacher “orchestrates” (instigates, oversees, and regulates) this dynamic process. New tools will enhance the teacher’s awareness of students’ classroom progress. The goal is to have highly effective and efficient learning processes for all students, and effective “orchestration support” for teachers.”

Why I’m Interested:

  • Capitalizes on the strengths of students, teachers, and technology
  • Creatively addresses differentiation and individualized instruction
  • Promotes collaborative learning
  • Relevant for all subjects

Learn More:

http://kenholstein.com/JLA_CodesignOrchestration.pdf
Teacher smart glasses (Lumilo)

  2. Project:

Using Gaze in a Reading Tutor


“The big question the PIs are addressing in this project is how to unobtrusively track silent reading of novice readers so as to be able to use an intelligent tutoring system to aid reading comprehension…This pilot project builds on previous work in vision and speech technology, sensor fusion, machine learning, user modeling, intelligent tutors, and eye movements in an effort to identify the feasibility of using eye tracking techniques, along with other information collected from an intelligent reading tutor, to predict reading difficulties of novice/young readers.”

“The project’s most important potential broader impacts is in establishing a foundation for exploiting gaze input to build intelligent computing systems that can be used to help children with reading difficulties learn to read and read to learn.”

Why I’m Interested:

  • Targets reading comprehension which would help students in all subjects
  • Could decrease student frustration
  • May identify and intercept issues early, translating to great academic gains over time
  • Interacts personally with all students simultaneously in ways one teacher could not
  • Allows for meaningful individual reading practice

Learn More:

Perhaps because this was a pilot program, no further information has been published. As a teacher looking toward the future and wanting to shape the conversation as it’s happening, I want to know more!  I want to know what happened during this exploratory project and how similar projects could build on their work.

  3. Project:

Transforming World Language Education using Social Robotics


Figure 2. Students interacting with RALL-E robots.

“The social robot being developed in this project is designed to act as a language partner for students learning a foreign language, in this case those learning Chinese. It augments classroom instruction, providing for the learner a robot companion to converse with. The hypothesis is that social robots can make interactions with language speakers more exciting and more accessible, especially for less commonly taught languages. The embodied robot is designed not only to converse with learners but also to point and nod and gesture at particular people and objects, helping to direct the attention of learners and interact socially with learners in ways that a non-embodied simulation cannot.”

Why I’m Interested:

  • Opens access for learning languages like Mandarin and Hindi that are spoken by hundreds of millions of people around the world but are not routinely offered in American schools
  • Could easily be used in formal and informal settings
  • Applies robotics beyond STEM subjects

Learn More:

https://circlcenter.org/interactive-robot-for-learning-chinese/

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

References

Holstein, K., McLaren, B. M., & Aleven, V. (2018b). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. In C. Penstein Rosé, R. Martínez-Maldonado, U. Hoppe, R. Luckin, M. Mavrikis, K. Porayska-Pomsta, B. McLaren, & B. du Boulay (Eds.), Proceedings of the 19th International Conference on Artificial Intelligence in Education (AIED 2018), 27–30 June 2018, London, UK (pp. 154–168). Springer, Cham. https://doi.org/10.1007/978-3-319-93843-1_12


My Students are Deep, Complex, and Beautiful. Assessments Should Be, Too.

by Sarah Hampton

I have a love/hate relationship with standardized assessments. I understand they’re necessary so we can get a picture of what our students are taking away from their school experience. BUT, there are so many times when I find myself making decisions about my instruction based on how I think it will affect the summative assessment. Sometimes I am even choosing between holistic practices advocated by the standards themselves for learning and superficial practices I’ve seen work to improve test scores. Teachers are accountable to the test scores, so there is pressure to make decisions that give quick gains but may not result in long-term learning.

This conflict seems to be a common experience. When I talk to fellow educators about new pedagogies, tools, or curricula, one of the very first questions asked is always, “How will this affect my test scores?” Bottom line: high stakes assessments are still driving instructional decisions, for better or worse.

If the summative assessments were perfectly designed to really assess learning, then this would be ideal, right? If tests were truly an accurate reflection of everything we want our students to know for future learning, to be able to do, and help them be, then making instructional decisions that cause those scores to increase would mean our students were getting closer to those goals. However, I have a major bone to pick: Current standardized tests are an oversimplification of the depth, complexity, and beauty of my students.

I get that it’s important to prove that schools are doing what we say they’re doing. It’s important that we’re held accountable for all students learning in ways that benefit them. My problem? I don’t think current standardized tests are actually proving that students are learning. Most tests (even the adaptive ones I’ve seen) are still multiple choice, true/false, drop down, matching, or, occasionally, fill in the blank or short answer. The outcomes I want from my students simply cannot be properly evaluated that way! In addition, I shouldn’t feel pressured to sacrifice meaningful instructional activities that support my students’ complexity, depth, and beauty in order to make a shallow snapshot look better.

State and national assessments aren’t going away. I’m not even suggesting that they should. As a math and science teacher, I’m all about making informed decisions based on the right data. Unfortunately, as Pellegrino and Quellmalz note in Perspectives on the Integration of Technology and Assessment:

“in the realm of high-stakes assessment for No Child Left Behind (NCLB) accountability, a number of regulatory, economic, and logistical issues have constrained the breadth and depth of the content and performance standards assessed in annual on-demand tests. Standard, multiple-choice item formats continue to dominate large-scale, computer-based, high-stakes testing, resulting in an over reliance on simple, highly structured problems that tap fact retrieval and the use of algorithmic solution procedures.”

So, because simple, highly structured problems are the things most current tests assess, teachers are unintentionally incentivized to promote surface level thinking around oversimplified problems.

What if there was a way to design an assessment that wasn’t constrained that way? What if assessments were made of messy problems and could honor multiple student pathways to solutions? What if they could analyze and reward student thinking on a topic? Better standardized assessments could be one possible way to reconcile what and how I want my students to learn with how they can express what they know.

Here comes the good news: the learning sciences community is integrating technology in ways that are rapidly improving assessments so they can give us a more complex and accurate picture of what our students know and do! That means that investing in the complexity of our students would translate to better scores because the assessment could give us a more comprehensive image of their understanding. It’s like the assessment world is heading into 3-d printing instead of 2-d black and white line drawings! No more tug of war between deep, meaningful instruction and what I feel like I have to do for the test because, now, they can be the same thing. Yay!

Consider My Science Tutor (MyST), an NSF-funded project of a type called an intelligent virtual tutor. MyST uses artificial intelligence to interpret and assess how well students understand science concepts by analyzing their dialog. The principal investigators of MyST say that “its goal is to assess student understanding of concepts rather than facts, which is very important to prepare students and the future workforce in STEM.” Student understanding of concepts rather than facts? That’s what I want!

Before advancements like the ones that make MyST a reality, a student’s answer on an assessment had to exactly match the correct answer designated by the programmer. That’s why standardized tests relied so heavily on multiple choice, T/F, etc. Now, machine learning allows computers like MyST to ‘learn’ what understanding a topic sounds like by analyzing speech from multiple dialog sessions–over 13,000 sessions for MyST, in fact. Then, it can analyze how well a new student has learned the concept based on what MyST ‘knows’. This is the essence of artificial intelligence: a machine doing what was previously thought to be unique to humans. In Towards artificial intelligence-based assessment systems, Rose Luckin says, “AI is a powerful tool to open up the ‘black box of learning’, by providing a deep, fine-grained understanding of when and how learning actually happens.”

This level of comprehension analysis was formerly only possible through one-on-one evaluation by teachers or researchers–an unrealistic burden on human time. Now, as Pellegrino and Quellmalz say:

“In addition to assessment of student knowledge and skills in highly structured problems with one right answer, technology can also support the design of complex, interactive tasks that extend the range of knowledge, skills, and cognitive processes that can be assessed.”

The implications of this could transform the assessment landscape by allowing multiple computers to ‘listen’ to each student at once and check for genuine understanding. There’s something beautiful about that. Even more, when assessments can help teachers evaluate complex understanding using open-ended problems, teachers are incentivized to promote deep thinking of deep problems. My students are deep, complex, and beautiful. Assessments should be, too.

How about you? How do you resolve the ever present conflict between good teaching and standardized assessments? Do you think standardized assessments cover important issues? What do you think about creating new assessments using AI? Join the conversation by tweeting @CIRCLEducators.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Five CIRCL Educators stand next to a Cyberlearning 2019 banner

Harnessing Educational Data: Discussing Dr. Safiya Noble’s Keynote from Cyberlearning 2019

By Pati Ruiz, Sarah Hampton, Judi Fusco, Amar Abbott, and Angie Kalthoff

In October 2019 the CIRCL Educators gathered in Alexandria, Virginia for Cyberlearning 2019: Exploring Contradictions in Achieving Equitable Futures (CL19). For many of us on the CIRCL Educators’ team, it was the first opportunity to meet in person after working collaboratively online for years. In addition, CL19 provided us with opportunities to explore learning in the context of working with technology and meet with researchers with diverse expertise and perspectives. We explored the tensions that arise as research teams expand the boundaries of learning, and considered how cyberlearning research might be applied in practice.

One of the topics we thought a lot about at CL19 was algorithms. We had the opportunity to hear from keynote speaker Safiya Noble, an Associate Professor at UCLA and author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). In her keynote, The Problems and Perils of Harnessing Big Data for Equity & Justice, Dr. Noble described the disturbing findings she uncovered when she started investigating search algorithms. She was not satisfied with the answer that the way algorithms categorized people, particularly girls of color, was simply what “the public” wanted. She dug in deeper, and what she said really made us think.

This keynote is related to some of the conversations we’re having about Artificial Intelligence (AI), so we decided to re-watch the recorded version and discuss the implications of harnessing Big Data for students, teachers, schools, and districts. Big Data and algorithms are crucial in much of the work related to AI. We bring this into our series on AI because even though math and numbers seem culturally neutral, there are ways they are biased and can be used to promote discrimination. In this post, we don’t summarize the keynote; instead, we tell you what really got us thinking. We encourage you to watch it too.

Besides discussing search algorithms, Dr. Noble also discussed the implications of technology, data, and algorithms in the classroom. For example, she shared how she breaks down for her students how a Learning Management System works, so they know that the technology they are using can inform their professors of how often and for how long they log into the system (among other things). She said they were often surprised that their teachers could learn these things. She went on to say:

“These are the kinds of things that are not transparent, even to the students that many of us are working with and care about so deeply.”

Another idea from the talk that particularly resonated with us as teachers is the social value of forgetting. Sometimes there is value in digitally preserving data, but sometimes there is more value in NOT documenting it.

“These are the kinds of things when we think about, what does it mean to just collect everything? Jean-François Blanchette writes about the social value of forgetting. There’s a reason why we forget, and it’s why juvenile records, for example, are sealed and don’t follow you into your future so you can have a chance at a future. What happens when we collect, when we use these new models that we’re developing, especially in educational contexts? I shudder to think that my 18-year-old self and the nonsense papers (quite frankly who’s writing a good paper when they’re 18) would follow me into my career? The private relationship of feedback and engagement that I’m trying to have with the faculty that taught me over the course of my career or have taught you over the course of your career, the experimentation with ideas that you can only do in that type of exchange between you and your instructor, the person you’re learning from, that being digitized and put into a system, a system that in turn could be commercialized and sold at some point, and then being data mineable. These are the kinds of real projects that are happening right now.”

We are now thinking a lot about how to help students and teachers better understand how our digital tools work, and about how to balance the benefits of using technology to help learners against the potential problem of hyper-datafication: saving everything and never letting a learner move past some of their history.

As we think through this tension, and other topics in the keynote, some of the questions that came up for us include:

  • What information is being collected from our students and their families/homes and why? Where does the information go?
  • Who is creating the app that is collecting the data? Are they connected to other programs/companies that can benefit from the data?
  • What guidelines for privacy does the software company follow? FERPA/COPPA? Do there need to be more or updated standards? What policies aren’t yet in place that we need to protect students?
  • What kinds of data are being digitally documented that could still be available years after a student has graduated? How could that impact them in job searches? Or, what happens when our students, who have documented their whole lives digitally, want to run for public office?
  • There are well-documented protocols for destroying students’ physical work, so what documented protocols are in place for their digital work?
  • Are school devices (e.g., Chromebooks or iPads) that contain sensitive student data being shared? Are all devices wiped between school years?
    • Students clean out their desks and lockers at the end of the school year; should we be teaching them to clean out their devices, too?
    • Do students have an alternative to using software or devices if they or their families have privacy concerns? Should they?
  • Is someone in your district (or school) accountable for privacy evaluation, software selection, and responsible use?
    • How are teachers being taught what to look for and evaluate in software?

In future posts, we’ll cover more of what Dr. Noble suggested based on her work, including the following points she made:

  1. (Re)consider the effect of hyper-datafication
  2. Resist making issues of justice and ethics an afterthought or additive
  3. Protect vulnerable people (students) from surveillance and data profiling
  4. Fund critical digital media research, literacy programs, and education
  5. Curate the indexable web, create multiple paths to knowledge
  6. Reduce technology over-development and its impact on people and the planet
  7. Never give up on the right things for the planet and the people

Figure: Dr. Noble on stage at the Cyberlearning 2019 meeting.

Finally, some of us have already picked up a copy of Algorithms of Oppression: How Search Engines Reinforce Racism and if you read it, we would love to hear your thoughts about it. Tweet @CIRCLEducators. Also, let us know if you have questions or thoughts about the keynote and/or algorithms.


Introduction to Artificial Intelligence in Education

By Sarah Hampton

As an avid fan of CIRCL and the wife of a programmer, I can safely say I’m somewhat of a technophile. I’m typically and happily an early adopter of ed tech. Even so, my initial reaction to artificial intelligence (AI) in education was somewhere between skeptical and antagonistic. Like many teachers I’ve talked with, I was concerned that using AI would weaken the human connection that’s so important for a healthy school environment. I was and remain concerned about equity and access issues around technology. I also have serious questions about my students’ privacy. However, as I started digging into what AI actually is (and isn’t), I realized that I should learn more about it so I can offer my voice as a teacher to the communities developing the tools they want us to use. Over the summer, with the CIRCL Educator team, I’ll be digging into AI. In a series of posts, I will share the most important, perspective-changing, and exciting things I’ve learned about artificial intelligence and what it might mean for education. I hope you’ll join me and let me know your questions and concerns.

First, let’s clarify artificial intelligence. What is and isn’t AI?

Let’s start with defining AI as a machine doing something we formerly thought only humans could do. More specifically, though, AI is just a specific type of computer software. The difference between AI and the software you’re already familiar with is that it doesn’t follow a linear set of simple instructions. Instead, AI uses algorithms or rules that are set initially by the developer (a human), and then the AI builds a model as it runs through data. The AI continually fine-tunes the model as it encounters more data. That’s why some people say AI “learns” or “teaches itself.” It’s not learning like a human would; it’s building models that optimize for criteria set in the algorithm. (For my math colleagues, think regressions/curve fitting on steroids.) The names AI and machine learning, which is a specific approach used in AI, make it sound like the software takes on a life of its own. That’s not true. As our English Language Arts colleagues could tell us, it’s just an example of anthropomorphism: ascribing human characteristics to a nonhuman object.
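For my math colleagues (and anyone else curious), here is what I mean by “curve fitting on steroids” in a tiny Python sketch of my own. It isn’t taken from any real AI product; it just shows how the rules stay fixed while the model’s parameters keep settling in as more data arrives.

```python
# A toy illustration of a model improving as it sees more data.
# The "rule" (fit a straight line) never changes; the fitted model does.
import numpy as np

rng = np.random.default_rng(0)

def noisy_samples(n):
    """Fake data that roughly follows y = 2x + 1, plus random noise."""
    x = rng.uniform(0, 10, n)
    y = 2 * x + 1 + rng.normal(0, 1, n)
    return x, y

# Fit a line with a little data, then again with much more.
for n in (5, 50, 5000):
    x, y = noisy_samples(n)
    slope, intercept = np.polyfit(x, y, 1)  # the whole "model" is two numbers
    print(f"{n:>5} examples -> y = {slope:.2f}x + {intercept:.2f}")

# With more examples, the fit settles closer to the true rule (y = 2x + 1).
# Real AI systems do the same thing with far more parameters and messier data.
```

Real AI models have far more parameters than a slope and an intercept, but the pattern is the same: more (and better) data lets the software tune its model, which is all that “learning” really means here.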

We’ll consider different types of AI in a future post. For now, let’s look at AI in two ways: on one hand, compared to prior types of software, AI is extremely sophisticated and capable of things we thought were unique to humans twenty years ago; on the other hand, as we just discussed, it’s still just software building models from data.

Let’s take an example you might be familiar with: Grammarly. (Full disclosure: I don’t use Grammarly routinely, but I decided to investigate after seeing their YouTube ad about 500 times, and I’m guessing you may have seen those ads, too.) AI like the type Grammarly uses can “learn” what good writing looks like. It was trained on features of good writing by being shown hundreds of thousands of sentence pairs. In each pair, one sentence was written poorly and one was a well-written target sentence. From these pairs, Grammarly “gained insight” into elements of good writing. However, while the AI learns, it doesn’t understand why a sentence is good the way a human can. It can only recognize detailed features or patterns that are part of the examples. Then, when the AI receives a new writing sample, it compares that sample to the patterns it detected in the training examples to determine how closely it matches them. The AI provides guidance to the human writer by offering suggestions that would help the writing sample match the exemplary writing from the training.
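To make that training idea concrete, here is a hypothetical sketch in Python (again using the scikit-learn library). It is nothing like Grammarly’s real system, which relies on vastly larger models and data, and the sentence pairs below are invented. But it shows the basic move: flatten labeled pairs into examples, learn the patterns that separate the weak sentences from the strong ones, and then check how a new sentence compares.

```python
# A hypothetical, much-simplified sketch of learning from sentence pairs.
# NOT Grammarly's actual system; the pairs and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each pair: (poorly written sentence, well-written target sentence).
pairs = [
    ("me and him went to the store real quick", "he and I went to the store quickly"),
    ("the results was very good", "the results were very good"),
    ("she dont like them kind of movies", "she doesn't like those kinds of movies"),
]

# Flatten the pairs into labeled examples the model can learn from.
sentences = [sentence for pair in pairs for sentence in pair]
labels = ["needs work", "good"] * len(pairs)

# Learn which word patterns show up in the weak vs. strong sentences.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# Compare a new draft against the patterns found during training.
draft = "the answers was very good"
print(model.predict([draft]))  # likely ['needs work']
```

Again, the model never understands grammar the way a person does; it only notices which patterns tend to show up in the “good” column, which is why systems like this need enormous numbers of examples before their suggestions become useful.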

That’s one high-level example for today. I have other projects to go through in later posts, but I want to go back to thinking about how we define artificial intelligence. A recent EdSurge article brought up a great point: “when educators have different concepts of what makes a piece of technology or tool ‘intelligent,’ it means that a variety of tools get lumped into the AI category—even if they aren’t technically ‘artificially intelligent.’” Let’s think about what’s typically considered artificial intelligence to start to define it. I say start to define it because the field of AI is rapidly changing, and folks in the field are still working on a more precise definition. I’m making a checklist to help us differentiate AI from other kinds of technology.

Checklist: Is it AI?

For each technology below: is it AI, and why?

  • Projector or Document Cam: No. These are useful tools that let us do smart things, but they’re more hardware than software.
  • Smart Board: No. This is a mix of hardware and software, but the software doesn’t improve as you use it.
  • Basic robot like Ozobot or Dash and Dot: No. Cool robots, but the software doesn’t “learn” over time. Other robots may learn, but not these.
  • LMS (e.g., Google Classroom, Schoology, Canvas): No. LMSs could support the use of AI software to present information adaptively or to grade assignments, but they do not currently implement AI themselves.
  • IXL: No. This software does some interesting things that seem like they might be AI, but it doesn’t improve as it interacts with more users.
  • Siri, Alexa, Ok Google, etc.: Yes. This software has been trained with lots and lots of voices so it can recognize yours. It also learns to recognize yours better over time.
  • Facial recognition: Yes. Face recognition technology is AI, but it is not considered very robust, meaning it can easily misidentify people.
  • Self-driving car: Yes. As the self-driving car takes more and more test drives, it gets better at driving.
  • Carnegie Learning’s MATHia: Yes. MATHia is Carnegie Learning’s online software program that deploys artificial intelligence to teach math. By providing targeted coaching and adapting to student thinking, MATHia mirrors a human tutor with more complexity and precision than other math software.
  • Grammarly: Yes. Grammarly’s products are powered by a system that combines rules, patterns, and artificial intelligence techniques like machine learning, deep learning, and natural language processing to improve your writing.
  • Adaptive Computer-based Testing: Maybe. It might or might not be AI, depending on the software. Stay tuned for more on this in a future post!

What’s up next? We have lots more to share, including AI projects from CIRCL during the CIRCL Educators Summer of AI! We’ll also tackle some of the big questions educators have about AI, like:

  • When will AI matter to me? How could AI make teacher learning more relevant, valuable, or effective?
  • Should I be worried that AI will replace me? What is the ideal balance between human and machine?
  • What needs to be considered so AI can help teachers support students of different races, cultures, genders, and abilities without bias (or with less bias)?

I want to thank Pati Ruiz, Judi Fusco, and Patti Schank for their thinking and help with this post. An additional thank you goes to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Tweet @CIRCLEducators and let us know if you have questions or thoughts about AI.