Category Archives: AI in Education

Three Visionary Projects Using AI in Education

by Sarah Hampton

One of my favorite things about our Summer of AI is learning about cyberlearning projects and how they might benefit future students. In this post, I want to showcase three projects that caught my attention because they use AI in different ways and for different ages. When we began in June, I was thinking AI might be mostly about robots in STEM classes or general AIs like Siri or Alexa. But now, after learning about these three example projects and many more, I realize that the future might be more about specialized AIs giving teachers information and ways to personalize learning. Sometimes this is behind the scenes, like the first project I highlight. Sometimes, like the third project, a robot is used in Mandarin class (instead of in a technology class). Let us know what you think about these projects and their potential to change how you teach and learn @CIRCLEducators!

  1. Project:

Human/AI Co-Orchestration of Dynamically-Differentiated Collaborative Classrooms

""

Figure 1. Left: A teacher using Lumilo while her students work with Lynette, an ITS for equation solving, in class (from Holstein et al., 2018b); Right: A point-of-view screenshot through Lumilo.

“This project will create and demonstrate new technology that supports dynamically-differentiated instruction for the classroom of the future. This new vision centers on carefully-designed partnerships between teachers, students, and artificial intelligence (AI). AI-powered learning software will support students during problem-solving practice, providing either individual guidance (using standard intelligent tutoring technology) or guidance so students can effectively collaborate and tutor each other. These learning activities are constantly adjusted to fit each student’s needs, including switching between individual or collaborative learning. The teacher “orchestrates” (instigates, oversees, and regulates) this dynamic process. New tools will enhance the teacher’s awareness of students’ classroom progress. The goal is to have highly effective and efficient learning processes for all students, and effective “orchestration support” for teachers.”
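The project abstract doesn’t publish its decision rules, so here is a purely illustrative sketch of what “dynamically-differentiated” could look like in code: per-student logic that recommends switching between individual tutoring, peer collaboration, or a teacher check-in. Every field name and threshold below is invented for illustration.

```python
# Hypothetical sketch of orchestration logic (NOT the project's actual rules):
# decide, per student, whether to continue individual tutoring, pair students,
# or flag the teacher. All fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class StudentState:
    name: str
    error_rate: float    # fraction of recent problem steps answered incorrectly
    hint_requests: int   # hints requested on the current problem
    idle_seconds: float  # time since the student's last interaction

def recommend_mode(s: StudentState) -> str:
    """Suggest an activity mode; the teacher approves or overrides it."""
    if s.error_rate > 0.6 and s.hint_requests >= 3:
        return "pair with a peer tutor"      # struggling despite hints
    if s.idle_seconds > 120:
        return "flag for teacher check-in"   # possibly stuck or disengaged
    if s.error_rate < 0.1:
        return "offer a challenge problem"   # ready for harder work
    return "continue individual tutoring"

for student in (StudentState("A", 0.70, 4, 30.0),
                StudentState("B", 0.05, 0, 10.0)):
    print(student.name, "->", recommend_mode(student))
```

Notice that the sketch only recommends; in the project’s vision, the teacher stays in the loop and “orchestrates” what actually happens.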

Why I’m Interested:

  • Capitalizes on the strengths of students, teachers, and technology
  • Creatively addresses differentiation and individualized instruction
  • Promotes collaborative learning
  • Relevant for all subjects

Learn More:

http://kenholstein.com/JLA_CodesignOrchestration.pdf
Teacher smart glasses (Lumilo)

  2. Project:

Using Gaze in a Reading Tutor


“The big question the PIs are addressing in this project is how to unobtrusively track silent reading of novice readers so as to be able to use an intelligent tutoring system to aid reading comprehension…This pilot project builds on previous work in vision and speech technology, sensor fusion, machine learning, user modeling, intelligent tutors, and eye movements in an effort to identify the feasibility of using eye tracking techniques, along with other information collected from an intelligent reading tutor, to predict reading difficulties of novice/young readers.”

“The project’s most important potential broader impacts is in establishing a foundation for exploiting gaze input to build intelligent computing systems that can be used to help children with reading difficulties learn to read and read to learn.”
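Because the pilot’s models aren’t published, the following is only a guess at the general shape of the technique: combine gaze features that reading researchers commonly use (fixation duration, regressions, rereads) into a difficulty estimate a tutor could act on. All of the weights below are invented.

```python
# Illustrative sketch only; the pilot's actual models aren't published.
# Gaze research commonly treats long fixations and regressions (backward
# eye movements) as difficulty signals; the weights here are invented.

def difficulty_score(mean_fixation_ms: float,
                     regressions_per_100_words: float,
                     rereads: int) -> float:
    """Combine gaze features into a rough 0-1 reading-difficulty estimate."""
    score = 0.5 * min(mean_fixation_ms / 600.0, 1.0)          # long fixations
    score += 0.3 * min(regressions_per_100_words / 30.0, 1.0)  # backtracking
    score += 0.2 * min(rereads / 5.0, 1.0)                     # repeated passes
    return score

# A reader with long fixations and many regressions scores high, which a
# tutor could use to intervene (e.g., offer hints or read a passage aloud).
print(difficulty_score(mean_fixation_ms=450,
                       regressions_per_100_words=22,
                       rereads=3))
```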

Why I’m Interested:

  • Targets reading comprehension, which would help students in all subjects
  • Could decrease student frustration
  • May identify and intercept issues early, translating to greater academic gains over time
  • Interacts personally with all students simultaneously in ways one teacher could not
  • Allows for meaningful individual reading practice

Learn More:

Perhaps because this was a pilot project, no further information has been published. As a teacher looking toward the future and wanting to shape the conversation as it’s happening, I want to know more! I want to know what happened during this exploratory project and how similar projects could build on its work.

  3. Project:

Transforming World Language Education using Social Robotics


Figure 2. Students interacting with RALL-E robots.

“The social robot being developed in this project is designed to act as a language partner for students learning a foreign language, in this case those learning Chinese. It augments classroom instruction, providing for the learner a robot companion to converse with. The hypothesis is that social robots can make interactions with language speakers more exciting and more accessible, especially for less commonly taught languages. The embodied robot is designed not only to converse with learners but also to point and nod and gesture at particular people and objects, helping to direct the attention of learners and interact socially with learners in ways that a non-embodied simulation cannot.”

Why I’m Interested:

  • Opens access for learning languages like Mandarin and Hindi that are spoken by hundreds of millions of people around the world but are not routinely offered in American schools
  • Could easily be used in formal and informal settings
  • Applies robotics beyond STEM subjects

Learn More:

https://circlcenter.org/interactive-robot-for-learning-chinese/

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

References

Holstein, K., McLaren, B. M., & Aleven, V. (2018b). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. In C. Penstein Rosé, R. Martínez-Maldonado, U. Hoppe, R. Luckin, M. Mavrikis, K. Porayska-Pomsta, B. McLaren, & B. du Boulay (Eds.), Proceedings of the 19th International Conference on Artificial Intelligence in Education (AIED 2018), 27–30 June 2018, London, UK (pp. 154–168). Springer, Cham. https://doi.org/10.1007/978-3-319-93843-1_12


My Students are Deep, Complex, and Beautiful. Assessments Should Be, Too.

by Sarah Hampton

I have a love/hate relationship with standardized assessments. I understand they’re necessary so we can get a picture of what our students are taking away from their school experience. BUT, there are so many times when I find myself making decisions about my instruction based on how I think it will affect the summative assessment. Sometimes I am even choosing between holistic practices advocated by the standards themselves for learning and superficial practices I’ve seen work to improve test scores. Teachers are accountable for test scores, so there is pressure to make decisions that give quick gains but may not result in long-term learning.

This conflict seems to be a common experience. When I talk to fellow educators about new pedagogies, tools, or curricula, one of the very first questions asked is always, “How will this affect my test scores?” Bottom line: high stakes assessments are still driving instructional decisions, for better or worse.

If the summative assessments were perfectly designed to really assess learning, then this would be ideal, right? If tests were truly an accurate reflection of everything we want our students to know, to be able to do, and to become, then making instructional decisions that cause those scores to increase would mean our students were getting closer to those goals. However, I have a major bone to pick: current standardized tests are an oversimplification of the depth, complexity, and beauty of my students.

I get that it’s important to prove that schools are doing what we say they’re doing. It’s important that we’re held accountable for all students learning in ways that benefit them. My problem? I don’t think current standardized tests are actually proving that students are learning. Most tests (even the adaptive ones I’ve seen) are still multiple choice, true/false, drop down, matching, or, occasionally, fill in the blank or short answer. The outcomes I want from my students simply cannot be properly evaluated that way! In addition, I shouldn’t feel pressured to sacrifice meaningful instructional activities that support my students’ complexity, depth, and beauty in order to make a shallow snapshot look better.

State and national assessments aren’t going away. I’m not even suggesting that they should. As a math and science teacher, I’m all about making informed decisions based on the right data. Unfortunately, as Pellegrino and Quellmalz note in Perspectives on the Integration of Technology and Assessment:

“in the realm of high-stakes assessment for No Child Left Behind (NCLB) accountability, a number of regulatory, economic, and logistical issues have constrained the breadth and depth of the content and performance standards assessed in annual on-demand tests. Standard, multiple-choice item formats continue to dominate large-scale, computer-based, high-stakes testing, resulting in an over reliance on simple, highly structured problems that tap fact retrieval and the use of algorithmic solution procedures.”

So, because simple, highly structured problems are the things most current tests assess, teachers are unintentionally incentivized to promote surface level thinking around oversimplified problems.

What if there were a way to design an assessment that wasn’t constrained that way? What if assessments were made of messy problems and could honor multiple student pathways to solutions? What if they could analyze and reward student thinking on a topic? Better standardized assessments could be one way to reconcile what and how I want my students to learn with how they can express what they know.

Here comes the good news: the learning sciences community is integrating technology in ways that are rapidly improving assessments so they can give us a more complex and accurate picture of what our students know and can do! That means that investing in the complexity of our students would translate to better scores, because the assessment could give us a more comprehensive image of their understanding. It’s like the assessment world is moving to 3D printing instead of 2D black-and-white line drawings! No more tug of war between deep, meaningful instruction and what I feel like I have to do for the test because, now, they can be the same thing. Yay!

Consider My Science Tutor (MyST), an NSF-funded project that built what is called an intelligent virtual tutor. MyST uses artificial intelligence to interpret and assess how well students understand science concepts by analyzing their dialog. The principal investigators of MyST say that “its goal is to assess student understanding of concepts rather than facts, which is very important to prepare students and the future workforce in STEM.” Student understanding of concepts rather than facts? That’s what I want!

Before advancements like the ones that make MyST a reality, a student’s answer on an assessment had to exactly match the correct answer designated by the programmer. That’s why standardized tests relied so heavily on multiple choice, T/F, etc. Now, machine learning allows computers like MyST to ‘learn’ what understanding a topic sounds like by analyzing speech from multiple dialog sessions–over 13,000 sessions for MyST, in fact. Then, it can analyze how well a new student has learned the concept based on what MyST ‘knows’. This is the essence of artificial intelligence: a machine doing what was previously thought to be unique to humans. In Towards artificial intelligence-based assessment systems, Rose Luckin says, “AI is a powerful tool to open up the ‘black box of learning’, by providing a deep, fine-grained understanding of when and how learning actually happens.”
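MyST’s actual pipeline isn’t public, but the general technique it describes is supervised learning over transcribed speech: show the system many student explanations that human scorers have labeled, let it find the patterns, then score new explanations. Here is a minimal sketch of that idea with invented toy data.

```python
# A minimal sketch of the general technique, NOT MyST's actual pipeline
# (which is not public): train a text classifier on transcribed student
# explanations that human scorers have labeled, then score new explanations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data; MyST reportedly drew on 13,000+ real dialog sessions.
explanations = [
    "the circuit is closed so electricity can flow through the wire",
    "the battery pushes current around the loop to light the bulb",
    "the bulb lights up because it is shiny",
    "electricity just goes to the bulb somehow",
]
labels = [1, 1, 0, 0]  # 1 = shows conceptual understanding, 0 = does not

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(explanations)
model = LogisticRegression().fit(features, labels)

# Score a new (transcribed) explanation from a student.
new = vectorizer.transform(["current flows in a complete loop from the battery"])
print(model.predict_proba(new)[0][1])  # estimated probability of understanding
```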

This level of comprehension analysis was formerly only possible through one-on-one evaluation by teachers or researchers–an unrealistic burden on human time. Now, as Pellegrino and Quellmalz say:

“In addition to assessment of student knowledge and skills in highly structured problems with one right answer, technology can also support the design of complex, interactive tasks that extend the range of knowledge, skills, and cognitive processes that can be assessed.”

The implications of this could transform the assessment landscape by allowing multiple computers to ‘listen’ to each student at once and check for genuine understanding. There’s something beautiful about that. Even more, when assessments can help teachers evaluate complex understanding using open-ended problems, teachers are incentivized to promote deep thinking about deep problems. My students are deep, complex, and beautiful. Assessments should be, too.

How about you? How do you resolve the ever present conflict between good teaching and standardized assessments? Do you think standardized assessments cover important issues? What do you think about creating new assessments using AI? Join the conversation by tweeting @CIRCLEducators.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.


Harnessing Educational Data: Discussing Dr. Safiya Noble’s Keynote from Cyberlearning 2019

By Pati Ruiz, Sarah Hampton, Judi Fusco, Amar Abbott, and Angie Kalthoff

In October 2019, the CIRCL Educators gathered in Alexandria, Virginia, for Cyberlearning 2019: Exploring Contradictions in Achieving Equitable Futures (CL19). For many of us on the CIRCL Educators team, it was our first opportunity to meet in person after working collaboratively online for years. CL19 also gave us opportunities to explore learning in the context of working with technology and to meet researchers with diverse expertise and perspectives. We explored the tensions that arise as research teams expand the boundaries of learning and considered how cyberlearning research might be applied in practice.

One of the topics we thought a lot about at CL19 is algorithms. We had the opportunity to hear from keynote speaker Safiya Noble, an Associate Professor at UCLA and author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). In her keynote, The Problems and Perils of Harnessing Big Data for Equity & Justice, Dr. Noble described the disturbing findings she uncovered when she started investigating search algorithms. She was not satisfied with the answer that the way algorithms categorized people, particularly girls of color, was what “the public” wanted. She dug in deeper, and what she said really made us think.

This keynote is related to some of the conversations we’re having about Artificial Intelligence (AI), so we decided to re-watch the recorded version and discuss the implications of harnessing Big Data for students, teachers, schools, and districts. Big Data and algorithms are crucial in much of the work related to AI. We bring this into our series on AI because even though math and numbers seem culturally neutral, there are ways they are not, and they can be used to promote discrimination. In this post, we don’t summarize the keynote, but we tell you what really got us thinking. We encourage you to watch it too.

Besides discussing algorithms for search, Dr. Noble also discussed the implications of technology, data, and algorithms in the classroom. For example, she shared how she breaks down for her students how a Learning Management System works, so they know that the technology they are using can inform their professors of how often and how long they log into the system (among other things). She said they were often surprised that their teachers could learn these things. She went on to say:

“These are the kinds of things that are not transparent, even to the students that many of us are working with and care about so deeply.”
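To see how ordinary this kind of visibility is, here is a purely hypothetical sketch of the report an LMS could compute from its raw event log. The event format and names below are invented, but counting “how often and how long” students log in needs nothing more than data an LMS already keeps.

```python
# Purely hypothetical sketch: the event format and names are invented, but
# computing "how often and how long" students log in needs nothing more
# than the raw event log an LMS already keeps.
from collections import defaultdict

events = [  # (student, action, unix_timestamp)
    ("kim", "login", 1000), ("kim", "logout", 1900),
    ("kim", "login", 5000), ("kim", "logout", 5300),
    ("lee", "login", 2000), ("lee", "logout", 2100),
]

session_lengths = defaultdict(list)
open_logins = {}
for student, action, timestamp in events:
    if action == "login":
        open_logins[student] = timestamp
    elif action == "logout" and student in open_logins:
        session_lengths[student].append(timestamp - open_logins.pop(student))

for student, durations in session_lengths.items():
    print(f"{student}: {len(durations)} sessions, "
          f"{sum(durations) / 60:.1f} minutes total")
```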

Another idea that particularly resonated with us, as teachers, from the talk is the social value of forgetting. Sometimes there is value in digitally preserving data, but sometimes there is more value in NOT documenting it.

“These are the kinds of things when we think about, what does it mean to just collect everything? Jean–François Blanchette writes about the social value of forgetting. There’s a reason why we forget, and it’s why juvenile records, for example, are sealed and don’t follow you into your future so you can have a chance at a future. What happens when we collect, when we use these new models that we’re developing, especially in educational contexts? I shudder to think that my 18-year-old self and the nonsense papers (quite frankly who’s writing a good paper when they’re 18) would follow me into my career? The private relationship of feedback and engagement that I’m trying to have with the faculty that taught me over the course of my career or have taught you over the course of your career, the experimentation with ideas that you can only do in that type of exchange between you and your instructor, the person you’re learning from, that being digitized and put into a system, a system that in turn could be commercialized and sold at some point, and then being data mineable. These are the kinds of real projects that are happening right now.”

We are now thinking a lot about how to help students and teachers better understand how our digital technology tools work, and how we should balance the benefits of using technology to help learners against the potential problem of hyper-datafication: saving everything and never letting a learner move past some of their history.

As we think through this tension, and other topics in the keynote, some of the questions that came up for us include:

  • What information is being collected from our students and their families/homes and why? Where does the information go?
  • Who is creating the app that is collecting the data? Are they connected to other programs/companies that can benefit from the data?
  • What guidelines for privacy does the software company follow? FERPA/COPPA? Do there need to be more or updated standards? What policies aren’t yet in place that we need to protect students?
  • What kinds of data are being digitally documented that could still be available years after a student has graduated? How could that impact them in job searches? Or, what happens when our students, who have documented their whole lives digitally, want to run for public office?
  • There are well-documented protocols for destroying students’ physical work, so what documented protocols are in place for their digital work?
  • Are school devices (e.g., Chromebooks or iPads) that contain sensitive student data being shared? Are all devices wiped between school years?
    • Students clean out their desks and lockers at the end of the school year, should we be teaching them to clean out their devices?
    • Do students have an alternative to using software or devices if they or their families have privacy concerns? Should they?
  • Is someone in your district (or school) accountable for privacy evaluation, software selection, and responsible use?
    • How are teachers being taught what to look for and evaluate in software?

In future posts, we’ll cover some more of what Dr. Noble suggested based on her work including the following points she made:

  1. (Re)consider the effect of hyper-datafication
  2. Resist making issues of justice and ethics an afterthought or additive
  3. Protect vulnerable people (students) from surveillance and data profiling
  4. Fund critical digital media research, literacy programs, and education
  5. Curate the indexable web, create multiple paths to knowledge
  6. Reduce technology over-development and its impact on people and the planet
  7. Never give up on the right things for the planet and the people

Dr. Noble on stage at the Cyberlearning 2019 meeting.

Finally, some of us have already picked up a copy of Algorithms of Oppression: How Search Engines Reinforce Racism and if you read it, we would love to hear your thoughts about it. Tweet @CIRCLEducators. Also, let us know if you have questions or thoughts about the keynote and/or algorithms.


Introduction to Artificial Intelligence in Education

By Sarah Hampton

As an avid fan of CIRCL and the wife of a programmer, it’s safe to say I’m somewhat of a technophile. I’m typically and happily an early adopter of ed tech. Even so, my initial reaction to artificial intelligence (AI) in education was somewhere between skeptical and antagonistic. Like many teachers I’ve talked with, I was concerned that using AI would weaken the human connection that’s so important for a healthy school environment. I was and remain concerned about equity and access issues around technology. I also have serious questions about my students’ privacy. However, as I started digging into what AI actually is (and isn’t), I realized that I should learn more about it so I can offer my voice as a teacher to the communities developing the tools they want us to use. Over the summer, with the CIRCL Educator team, I’ll be digging into AI. In a series of posts, I will share the most important, perspective changing, and exciting things I’ve learned about artificial intelligence and what it might mean for education. I hope you’ll join me and let me know your questions and concerns.

First, let’s clarify artificial intelligence. What is and isn’t AI?

Let’s start by defining AI as a machine doing something we formerly thought only humans could do. More specifically, though, AI is just a specific type of computer software. The difference between AI and the software you’re already familiar with is that it doesn’t follow a linear set of simple instructions. Instead, AI uses algorithms, or rules, set initially by the developer (a human); the AI then builds a model as it runs through data, and it continually fine-tunes that model as it encounters more data. That’s why some people say AI “learns” or “teaches itself.” It’s not learning the way a human would; it’s building models that optimize for criteria set in the algorithm. (For my math colleagues, think regressions/curve fitting on steroids.) The names AI and machine learning (a specific approach used in AI) make it sound like the software takes on a life of its own. That’s not true. As our English Language Arts colleagues could tell us, it’s just an example of anthropomorphism: ascribing human characteristics to a nonhuman object.
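For my math colleagues, here is a tiny runnable sketch of that “curve fitting on steroids” idea. The developer fixes the algorithm (here, fitting a line by gradient descent), and the model’s parameters come entirely from the data, improving with each pass. This is a toy example of the general idea, not any particular product’s code.

```python
# Toy sketch of the "curve fitting on steroids" idea: the developer fixes
# the algorithm (least-squares line fitting by gradient descent), and the
# model's parameters come from the data, improving with every pass.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # (x, y) observations

slope, intercept = 0.0, 0.0
learning_rate = 0.01
for _ in range(2000):                       # each pass fine-tunes the model
    for x, y in data:
        error = (slope * x + intercept) - y
        slope -= learning_rate * error * x  # nudge parameters in the
        intercept -= learning_rate * error  # direction that reduces error

print(f"learned model: y = {slope:.2f}x + {intercept:.2f}")  # roughly y = 2x
```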

We’ll consider different types of AI in a future post. For now, we can look at AI in two ways: on one hand, compared to prior types of software, AI is extremely sophisticated and capable of things we thought were unique to humans twenty years ago; on the other hand, as described above, it is still just software building models from data.

Let’s take an example you might be familiar with–Grammarly. (Full disclosure: I don’t use Grammarly routinely, but I decided to investigate after seeing their YouTube ad about 500 times, and I’m guessing you may have seen those ads, too.) AI like the type Grammarly uses can “learn” what good writing looks like. It was trained on features of good writing by being shown hundreds of thousands of sentence pairs; in each pair, one sentence was written poorly and one was a well-written target sentence. From these pairs, Grammarly “gained insight” into elements of good writing. However, while the AI learns, it doesn’t understand why a sentence is good the way a human can. It can only recognize detailed features or patterns in the examples. Then, when someone uploads a new writing sample, the AI compares it to the patterns it detected in training to determine how closely it matches the features of the well-written sentences. The AI guides the human writer by offering suggestions that would make the sample more closely match the exemplary writing from the training.
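Grammarly’s real models are proprietary and far more sophisticated, but a toy sketch can show the underlying pattern-matching idea: train a classifier on labeled examples of poor and well-written sentences, then score new text against the learned patterns. All of the data below is invented.

```python
# Illustrative only; Grammarly's real models are far more sophisticated.
# The idea: learn surface patterns that distinguish well-written target
# sentences from poorly written ones, then score new text. Data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

pairs = [
    ("me and him goes to the store", 0),    # poorly written
    ("he and I go to the store", 1),        # well-written target
    ("she dont like them apples none", 0),
    ("she does not like those apples", 1),
]
texts = [t for t, _ in pairs]
labels = [label for _, label in pairs]

# Character n-grams pick up on word endings and agreement patterns.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 4))
features = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(features, labels)

sample = vectorizer.transform(["me and her was late"])
print(model.predict_proba(sample)[0][1])  # a low score suggests revision
```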

That’s one high-level example for today. I have other projects to go through in later posts, but I want to go back to thinking about how we define artificial intelligence. A recent EdSurge article brought up a great point: “when educators have different concepts of what makes a piece of technology or tool ‘intelligent,’ it means that a variety of tools get lumped into the AI category—even if they aren’t technically ‘artificially intelligent.’” Let’s think about what’s typically considered artificial intelligence to start to define it. I say start because the field of AI is rapidly changing, and folks in the field are still working on a more precise definition. I’m making a checklist to help us differentiate AI from other kinds of technology.

Checklist: Is it AI?

Technology | Is it AI? | Why?
Projector or Document Cam | No | These are useful tools, and they let us do smart things, but they’re more hardware than software.
Smart Board | No | This is a mix of hardware and software, but the software doesn’t improve as you use it.
Basic robot like Ozobot or Dash and Dot | No | Cool robots, but the software doesn’t “learn” over time. Other robots may learn, but not these.
LMS (e.g., Google Classroom, Schoology, Canvas) | No | LMSs could support the use of AI software, present information adaptively, or use AI for grading assignments, but these do not currently implement AI.
IXL | No | This software does some interesting things that seem like they might be AI, but the software doesn’t improve as it interacts with more users.
Siri, Alexa, OK Google, etc. | Yes | This software has been trained with lots and lots of voices so it can recognize yours. It also learns to recognize yours better over time.
Facial recognition | Yes | Face recognition technology is AI, but it is not considered very robust, meaning it can easily misidentify people.
Self-driving car | Yes | As the self-driving car takes more and more test drives, it gets better at driving.
Carnegie Learning’s MATHia | Yes | MATHia is Carnegie Learning’s online software program that deploys artificial intelligence to teach math, providing targeted coaching and adapting to student thinking the way a human tutor would.
Grammarly | Yes | Grammarly’s products are powered by a system that combines rules, patterns, and AI techniques like machine learning, deep learning, and natural language processing to improve your writing.
Adaptive computer-based testing | Maybe | It might or might not be AI, depending on the software. Stay tuned for more on this in a future post!

What’s up next? We have lots more to share, including AI projects from CIRCL during the CIRCL Educators Summer of AI! We’ll also tackle some of the big questions educators have about AI, like:

  • When will AI matter to me? How could AI make teacher learning more relevant, valuable, or effective?
  • Should I be worried that AI will replace me? What is the ideal balance between human and machine?
  • What needs to be considered so AI can help teachers support different races, cultures, genders, and students with different abilities in ways without bias (or with less bias)?

Thank you to Pati Ruiz, Judi Fusco, and Patti Schank for their thinking and help with this post. An additional thank you goes to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Tweet @CIRCLEducators and let us know if you have questions or thoughts about AI.