
Introduction to Artificial Intelligence in Education

By Sarah Hampton

As an avid fan of CIRCL and the wife of a programmer, it’s safe to say I’m somewhat of a technophile. I’m typically and happily an early adopter of ed tech. Even so, my initial reaction to artificial intelligence (AI) in education was somewhere between skeptical and antagonistic. Like many teachers I’ve talked with, I was concerned that using AI would weaken the human connection that’s so important for a healthy school environment. I was and remain concerned about equity and access issues around technology. I also have serious questions about my students’ privacy. However, as I started digging into what AI actually is (and isn’t), I realized that I should learn more about it so I can offer my voice as a teacher to the communities developing the tools they want us to use. Over the summer, with the CIRCL Educator team, I’ll be digging into AI. In a series of posts, I will share the most important, perspective-changing, and exciting things I’ve learned about artificial intelligence and what it might mean for education. I hope you’ll join me and let me know your questions and concerns.

First, let’s clarify artificial intelligence. What is and isn’t AI?

Let’s start by defining AI as a machine doing something we formerly thought only humans could do. More specifically, though, AI is just a specific type of computer software. The difference between AI and the software you’re already familiar with is that AI doesn’t follow a linear set of simple instructions. Instead, AI uses algorithms or rules that are set initially by the developer (a human), and then the AI builds a model as it runs through data. The AI continually fine-tunes the model as it encounters more data. That’s why some people say AI “learns” or “teaches itself.” It’s not learning like a human would; it’s building models that optimize for criteria set in the algorithm. (For my math colleagues, think regressions/curve fitting on steroids.) The terms AI and machine learning (a specific approach used in AI) make it sound like the software takes on a life of its own. That’s not true. As our English Language Arts colleagues could tell us, it’s just an example of anthropomorphism–ascribing human characteristics to a nonhuman object.
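If you’d like to see the curve-fitting analogy in action, here is a tiny, made-up sketch in Python. It is not the algorithm behind any real AI product; it just shows the general idea that a human writes the rules (here, fit a line to data), while the numbers in the model come from whatever data the program sees and get better as more data arrives.

```python
# A toy sketch of the "regressions on steroids" idea: a human writes the
# algorithm (ordinary least-squares line fitting), and the "model" (a slope
# and an intercept) is tuned from the data the program encounters.
# Illustration only -- not how any particular AI product works.
import numpy as np

rng = np.random.default_rng(seed=0)

def make_data(n):
    """Pretend 'experience': noisy observations of a hidden rule y = 3x + 2."""
    x = rng.uniform(0, 10, n)
    y = 3 * x + 2 + rng.normal(0, 2, n)
    return x, y

# Refit the model as more data arrives; the fitted numbers drift toward the
# hidden rule (slope 3, intercept 2) as the number of examples grows.
for n in (10, 100, 10_000):
    x, y = make_data(n)
    slope, intercept = np.polyfit(x, y, deg=1)   # the "learning" step
    print(f"{n:>6} examples -> model: y = {slope:.2f}x + {intercept:.2f}")
```

Real AI systems tune models with vastly more adjustable numbers than two, but the basic pattern is the same: build a model from data, then keep refining it as more data comes in.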

We’ll consider different types of AI in a future post. For now, let’s look at AI in two ways: on the one hand, compared to prior types of software, AI is extremely sophisticated and capable of things we thought were unique to humans twenty years ago; on the other hand, as we just saw, it’s still software that builds models from data rather than something that truly understands.

Let’s take an example you might be familiar with–Grammarly. (Full disclosure: I don’t use Grammarly routinely, but I decided to investigate after seeing their YouTube ad about 500 times and am guessing you may have seen those ads, too.) AI, like the type Grammarly uses, can “learn” what good writing looks like. It was trained to recognize features of good writing by being shown hundreds of thousands of sentence pairs. In each pair, one sentence was written poorly and the other was a well-written target sentence. From these pairs, Grammarly “gained insight” into the elements of good writing. However, while the AI learns, it doesn’t understand why a sentence is good the way a human can. It can only recognize the many detailed features or patterns that appear in the examples. Then, when someone uploads a new writing sample, the AI compares it to the patterns it detected in the training examples to determine how closely it matches the features of the well-written training sentences. The AI provides guidance to the human writer by offering suggestions that would make the writing sample more like the exemplary writing from the training.
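To make the sentence-pair idea concrete, here is a toy sketch in Python. The training sentences and labels are invented, and this tiny word-count classifier is nothing like Grammarly’s actual system; it only illustrates the point that the software matches patterns from labeled examples rather than understanding the writing.

```python
# A toy sketch of learning from "poorly written" vs. "well written" examples.
# Invented data and a deliberately simple model -- not Grammarly's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up training pairs: label 0 = poorly written, 1 = well written.
sentences = [
    "me and him goes to the store yesterday",                 # 0
    "He and I went to the store yesterday.",                  # 1
    "the report it were finish by the team late",             # 0
    "The team finished the report late.",                     # 1
    "there is many reason why this dont work good",           # 0
    "There are many reasons why this does not work well.",    # 1
]
labels = [0, 1, 0, 1, 0, 1]

# Turn sentences into word-count features and fit a simple classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(sentences)
model = LogisticRegression().fit(features, labels)

# Score a new sentence: the model reports how closely it matches the patterns
# of the "well written" training examples -- no understanding involved.
new_sentence = ["Him and me finished the reports good."]
score = model.predict_proba(vectorizer.transform(new_sentence))[0][1]
print(f"Looks 'well written' with probability {score:.2f}")
```

Notice that the model can only report how closely a new sentence matches patterns it has already seen; it has no notion of meaning, audience, or intent.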

That’s one high-level example for today. I have other projects to go through in later posts, but I want to go back to thinking about how we define artificial intelligence. A recent EdSurge article brought up a great point: “when educators have different concepts of what makes a piece of technology or tool ‘intelligent,’ it means that a variety of tools get lumped into the AI category—even if they aren’t technically ‘artificially intelligent.’” Let’s think about what’s typically considered artificial intelligence to start to define it. I say start to define it because the field of AI is rapidly changing, and folks in the field are still working on a more precise definition. I’m making a checklist to help us differentiate AI from other kinds of technology.

Checklist: Is it AI?

Technology | Is it AI? | Why?
Projector or Document Cam | No | These are useful tools, and let us do smart things, but they’re more hardware than software.
Smart Board | No | This is a mix of hardware and software, but the software doesn’t improve as you use it.
Basic robot like Ozobot or Dash and Dot | No | Cool robots, but the software doesn’t “learn” over time. Other robots may learn, but not these.
LMS (e.g., Google Classroom, Schoology, Canvas) | No | LMSs could support the use of AI software to present information adaptively or help grade assignments, but these do not currently implement AI.
IXL | No | This software does some interesting things that seem like they might be AI, but the software doesn’t improve as it interacts with more users.
Siri, Alexa, Ok Google, etc. | Yes | This software has been trained with lots and lots of voices so it can recognize yours. It also learns to recognize yours better over time.
Facial recognition | Yes | Face recognition technology is AI, but it is not considered very robust, meaning it can easily misidentify people.
Self-driving car | Yes | As the self-driving car takes more and more test drives, it gets better at driving.
Carnegie Learning’s MATHia | Yes | MATHia is Carnegie Learning’s online software program that deploys artificial intelligence to actually teach math. By providing targeted coaching and adapting to student thinking, MATHia mirrors a human tutor with more complexity and precision than any other math software.
Grammarly | Yes | Grammarly’s products are powered by an advanced system that combines rules, patterns, and artificial intelligence techniques like machine learning, deep learning, and natural language processing to improve your writing.
Adaptive Computer-based Testing | Maybe | It might or might not be AI, depending on the software. Stay tuned for more on this in a future post!

What’s up next? We have lots more to share, including AI projects from CIRCL during the CIRCL Educators Summer of AI! We’ll also tackle some of the big questions educators have about AI, like:

  • When will AI matter to me? How could AI make teacher learning more relevant, valuable, or effective?
  • Should I be worried that AI will replace me? What is the ideal balance between human and machine?
  • What needs to be considered so AI can help teachers support students of different races, cultures, genders, and abilities without bias (or at least with less bias)?

Thank you to Pati Ruiz, Judi Fusco, and Patti Schank for their thinking and help with this post. An additional thank you goes to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Tweet @CIRCLEducators and let us know if you have questions or thoughts about AI.

How to cite this work

CIRCL Educator posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: "Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)."

Suggested citation format: [Authors] ([Year]). [Title]. CIRCLEducators Blog. Retrieved from [URL]