Category Archives: AI in Education

Assessment Webinar Resources

Educators, Artificial Intelligence, and the Future of Learning

Watch the recording.

AI products discussed:

Meet the Practitioner Panelists:

Aaron Hawn
Research Affiliate, The Penn Center for Learning Analytics
Co-Founder and Managing Member, Thirteen Ways Consulting, LLC
Twitter: @hawn_aaron

Aaron’s Statement on AI:

As a former teacher/testing administrator and current researcher, I see the potential for AI to rewire schools’ relationship with assessment and accountability, winning back time for instruction, rebuilding trust in results, and opening new windows onto student skills. At the same time, I see that potential passing us by if teachers, education leadership, communities, and students are not engaged as partners from the start, designing AI tools for how real classrooms work and towards impacts that matter.

Nancy Foote
Conceptual Physics Teacher,
Higley Unified School District (AZ)
Twitter: @MrsFoote

Nancy Foote, MEd, is currently a Conceptual Physics teacher in Gilbert AZ. She worked as an Industrial Chemist for the Sherwin Williams Company before obtaining her Master’s degree and teacher certification. Nancy has been in education for more than 30 years as a teacher, principal, staff development coordinator, teacher on special assignment, and curriculum coach. A National Board Certified Teacher, Nancy is also a recipient of the Presidential Award for Excellence in Mathematics and Science Teaching.

Nancy’s Statement on AI:

Before I met Inq-ITS, and through them AI, I was floundering in the dark. I was trying to grade hundreds of lab reports, trying to determine who understood what, how to intervene when necessary, and how to help my students think like scientists. That wasn’t even taking the quality of the writing into consideration. Now, thanks to Inq-ITS and their masterful use of AI, I can be a teacher again. I can intervene at the perfect time. I can help students exactly when they need it with the intervention that they need. I have become a mind reader. Most importantly, my students are thinking like scientists.

Data Science in Ambitious Mashups

by Merijke Coenraad

This post will focus on Trends at NSF and Beyond: Data Science.

No matter what subject you teach, it is likely that data comes into play in your classroom. Whether it is statistical analysis of data in math, collecting and analyzing data in science, or analyzing historical and contemporary data to understand the present, data can be found in many classroom activities. A trend within CIRCL projects was harnessing the data revolution. With more and more data collected each day and the accessibility of data for you and your students, there are many ways you can learn from these projects and bring them into play within your classroom. For example, check out these two projects: STEM Literacy through Infographics and Data Clubs.

STEM Literacy Through Infographics

Created by a team of researchers from across the US, STEM Literacy Through Infographics helps students create infographics in both classroom and out-of-school settings so they can make visual, argumentative claims (watch a 3-minute video about the project). The project aims to provide students with the skills they need to ask and answer questions about their own lives and communicate that data to others through data journalism. This ambitious project brings together experts in educational technology, mathematics, and learning and mashes up data science, data visualization, and citizen science opportunities to help students make sense of the data that is available to them. If you’re interested, you can try out infographics in your classroom using their helpful step-by-step guide “How to Make Infographics”, classroom lesson plans and resources, and asynchronous professional development workshop.

Data Clubs

Data Clubs are designed by a team of researchers at TERC and SCIEDS to provide middle school students with data skills that allow them to visualize and analyze data about life-relevant topics. Within the lessons, students write their own questions, collect their own data, and learn how to represent their data by hand and using computer software. This ongoing ambitious project uses design-based research, collecting data about students’ data dispositions and interviewing them about their experiences. It mashes up mathematics, informal learning, data visualization, and statistics to help students think about the who, when, where, how, and why of data. Try out the currently available modules with your students!

These projects demonstrate the importance of quality data experiences for students and the role that data visualization can play in students’ learning from the large data sets that are available to them. Besides trying out materials from these projects, how can you use data science in your classroom? Here are some ideas:

  • Explore infographics on Information is Beautiful and have students create their own by hand (as seen in Dear Data) or using a computer program.
  • Engage students with visualizations of climate change on Climate.gov, run by NOAA. The platform provides a number of data visualization environments in which students can explore climate data.
  • Explore the Johns Hopkins US or World Coronavirus maps to discuss current trends (click on counties to see more specific data).
  • Explore data visualizations of the 2020 election from Statista or CISA to discuss trends in voting and the role that data visualizations play in data communication (consider showing this clip of CNN analysts using live data visualizations to discuss visualizations in election reporting).
  • Allow students to use data from Our World in Data and/or the CODAP platform to explore data and create their own visualizations.
  • Lead students in a discussion about data collection in their lives and the amount of data collected from their use of social media, online shopping, and other internet-connected activities. Provide students with the opportunity to critically analyze how companies are making money off of their data collection and what they could do to advocate for and protect themselves from harmful data use.

How do you use data in your classroom? Tweet us @EducatorCIRCLS and tell us about your innovative technology use and stay tuned for future blogs in this series about CIRCL Ambitious Mashups.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

Introduction to Ambitious Mashups

by Merijke Coenraad

As an educator, you know better than anyone else how much educational technology is changing, particularly within the last year. The Center for Innovative Research in Cyberlearning (CIRCL) has worked with researchers for the last eight years as they have developed and investigated learning environments and technology that pushed the boundaries between technology and education. This community of researchers (and their partner teachers) has focused on how emerging technologies could be important for learners and how they could be adapted into learning tools that would positively impact the education of students five or ten years later.

The recent Ambitious Mashups report examines the work researcher teams did. You might have also seen our previous post encouraging you to attend the Ambitious Mashups Webinar. In case you missed it live, you can watch it here. As we reviewed all of the projects, we discovered that they brought together researchers with computer science expertise, knowledge of learning sciences theories and methods, and a firm commitment to investigating equity. More than just focusing on emerging educational technologies, CIRCL projects had a strong focus on groups that are marginalized within society and underrepresented in STEM professions, such as students from marginalized races, girls, students from low-performing schools and low-income settings, students with disabilities, and students who are learning English.

Looking across projects, the CIRCL team found that researchers weren’t concerned with just one technology, research method, or learning theory. These projects were ambitious, pushing the frontiers of research and technology and studying big learning goals, and they were interdisciplinary mashups, involving many elements together in novel integrations. Therefore, we have deemed the results of CIRCL to be ambitious mashups, worthy of review not only by researchers but by educators as well. These ambitious mashups bring together novel technologies in previously unimagined ways to tackle learning challenges. For educators who will soon encounter these emerging technologies in the classroom, this report points to what you can expect from ed tech and to questions to start asking yourself as the ambitious research mashups of the past eight years become the technologies of the next decade.

So, what did we learn from looking at all the cyberlearning research? Reviewing the research projects completed through CIRCL, the team identified five themes representing the elements of the cyberlearning research community.

In this series of posts, we are going to look across some of these themes because we at CIRCL Educators believe that there are many things to think about as the emerging technologies of cyberlearning begin to enter the classroom and there are already exciting findings that can influence your teaching!

After eight years of researching together, the CIRCL community has learned a lot about what it means to do innovative research at the forefront of educational technology. Being a CIRCL Educator, what have you learned? How can you create an ambitious mashup in your classroom? Tweet us @EducatorCIRCLS and tell us about your innovative technology use and stay tuned for future blogs in this series about CIRCL Ambitious Mashups.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

AI and Formative Assessment

by Sarah Hampton

In my last post, I talked about effective formative assessments and their powerful impact on student learning. In this post, let’s explore why AI is well-suited for formative assessment.

  1. AI can offer individualized feedback on specific content.
  2. AI can offer individualized feedback that helps students learn how to learn.
  3. AI can provide meaningful formative assessment outside of school.
  4. AI might be able to assess complex and messy knowledge domains.

Individualized Feedback on Content Learning

I think individualized feedback is the most powerful advantage of AI for assessment. As a teacher, I can only be in one place at a time looking in one direction at a time. That means I have two choices for feedback: I can take some time to assess how each student is doing and then address general learning barriers as a class, or I can assess and give feedback to students one at a time. In contrast, AI allows for simultaneous individualized feedback for each student.

“AI applications can identify pedagogical materials and approaches adapted to the level of individual students, and make predictions, recommendations and decisions about the next steps of the learning process based on data from individual students. AI systems assist learners to master the subject at their own pace and provide teachers with suggestions on how to help them.” (Trustworthy artificial intelligence (AI) in education: promises and challenges)

Going one step further, AI can assess students without disrupting their learning through something called stealth assessment. While students work, the AI can quietly collect data in the background, such as the time it takes to answer questions or which incorrect strategies a student tried before succeeding, and organize it into a dashboard so teachers can use that data to inform what to focus on or clear up the next day in class. Note: As a teacher, I want the AI to help me do what I do best. I definitely want to see what each student needs in their learning. Also, as a teacher, I want to be able to control when the AI should alert me about intervening (as a caring human) instead of it trying to do something on its own that it isn’t capable of doing well.
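The background data collection just described can be sketched as a simple event log rolled up into a per-student summary. This is only a toy illustration; the event format, field names, and student names are all invented, not part of any real stealth-assessment product:

```python
from collections import defaultdict

# Each event records: (student, question, seconds_taken, answered_correctly)
events = [
    ("ana", "q1", 42, False),
    ("ana", "q1", 15, True),
    ("ben", "q1", 20, True),
    ("ana", "q2", 95, True),
]

def dashboard(events):
    """Aggregate raw events into one summary row per student."""
    summary = defaultdict(lambda: {"attempts": 0, "seconds": 0, "correct": 0})
    for student, question, seconds, correct in events:
        row = summary[student]
        row["attempts"] += 1        # total tries across all questions
        row["seconds"] += seconds   # total time on task
        row["correct"] += int(correct)
    return dict(summary)

for student, row in dashboard(events).items():
    print(student, row)
```

A real system would of course log far richer data, but even this tiny rollup shows the teacher-facing idea: the raw clickstream stays in the background, and the teacher sees only the summary.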

Feedback That Helps Students Learn How to Learn

“Two experimental research studies have shown that students who understand the learning objectives and assessment criteria and have opportunities to reflect on their work show greater improvement than those who do not (Fontana & Fernandes, 1994; Frederiksen & White, 1997).” (The Concept of Formative Assessment)

In the last post, I noted that including students in the process of self-assessment is critical to effective formative assessment. After all, we ultimately want students to be able to self-regulate their own learning. But, as one teacher, it can sometimes be difficult to remind students individually to stop and reflect on their work and brainstorm ways to close the gap between their current understanding and their learning goal. By contrast, regulation prompts can be built into AI software so students routinely stop and check for understanding and defend their reasoning, giving students a start on learning how to self-regulate.

For example, this is done in Crystal Island, an AI game-based platform for learning middle school microbiology, “students were periodically prompted to reflect on what they had learned thus far and what they planned to do moving forward…Students received several prompts for reflection during the game. After completing the game or running out of time, students were asked to reflect on their problem-solving experience as a whole, explaining how they approached the problem and whether they would do anything differently if they were asked to solve a similar problem in the future.” (Automated Analysis of Middle School Students’ Written Reflections During Game-Based Learning)

Figure: In-game reflection prompt presented to students in Crystal Island.

Meaningful Formative Assessment Outside of School

Formative assessment and feedback can come from many sources, but, traditionally, the main source is the teacher. Students only have access to their teacher inside the classroom and during class time. In contrast, AI software can provide meaningful formative assessment anytime and anywhere which means learning can occur anytime and anywhere, too.

In the next post, we’ll look at how one AI tool, ASSISTments, is using formative assessment to transform math homework by giving meaningful individualized feedback at home.

Assessing Complexity and Messiness

In the first post of the series, I discussed the need for assessments that can measure the beautiful complexity of what my students know. I particularly like the way Griffin, McGaw, and Care state it in Assessment and Teaching of 21st Century Skills:

“Traditional assessment methods typically fail to measure the high-level skills, knowledge, attitudes, and characteristics of self-directed and collaborative learning that are increasingly important for our global economy and fast-changing world. These skills are difficult to characterize and measure but critically important, more than ever. Traditional assessments are typically delivered via paper and pencil and are designed to be administered quickly and scored easily. In this way, they are tuned around what is easy to measure, rather than what is important to measure.”

We need assessments that measure what is important, not just what is easy. AI has the potential to help with that.

For example, I can learn more about how much my students truly understand a topic from reading a written response than from a multiple-choice response. However, it’s not possible to assess students this way frequently because of the time it takes to read and give feedback on each essay. (Consider some secondary teachers who see 150+ students a day!)

Fortunately, one major area for AI advancement has been in natural language processing. AIs designed to evaluate written and verbal ideas are quickly becoming more sophisticated and useful for providing helpful feedback to students. That means that my students could soon have access to a more thorough way to show what they know on a regular basis and receive more targeted feedback to better their understanding.

While the purpose of this post is to communicate the possible benefits of AI in education, it’s important to note that my excitement about these possibilities is not a carte blanche endorsement for them. Like all tools, AI has the potential to be used in beneficial or nefarious ways. There is a lot to consider as we think about AI and we’re just starting the conversation.

As AI advances and widespread classroom implementation becomes increasingly feasible, it’s time to seriously listen to those at the intersection of the learning sciences and artificial intelligence, like Rose Luckin. “Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policymakers to develop the ethical framework within which AI assessment can thrive and bring benefit.” (Towards artificial intelligence-based assessment systems)

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

We are still at the beginning of our conversation around AI in Education. What do you think? Do the possible benefits excite you? Do the possible risks concern you? Both? Let us know @EducatorCIRCLS.

Educator CIRCLS posts are licensed under a Creative Commons Attribution 4.0 International License. If you use content from this site, please cite the post and consider adding: “Used under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).”
Suggested citation format: [Authors] ([Year]). [Title]. Educator CIRCLS Blog. Retrieved from [URL]

ASSISTments: A Forward-thinking Formative Assessment AI Ready to Use in Your Classroom Today

by Sarah Hampton

In my last post, I discussed the ways AI can enhance formative assessment. In this post, let’s take a look at the AI example I’m most excited about and how it’s already benefited 11,000 teachers!

ASSISTments seems both practical and forward thinking, a unique combination. It can be frustrating to get excited about new technologies that are still in development and not yet ready for the classroom. But, unlike many cutting-edge projects I read about, ASSISTments is ready to implement now.

In their own words, “ASSISTments is more than an assessment tool. It improves the learning journey, translating student assessment to skill building and mastery, guiding students with immediate feedback, and giving teachers the time and data insights to drive every step of their lesson in a meaningful way.”

ASSISTments works through a four-step process to help you get started:

  1. Create assignments.
    Teachers select questions from existing or custom question banks. I was really impressed with the number and variety of sets already on the site. There are question sets from select open educational resources, textbook curricula, and released state tests ready to be assigned. There are also pre-made general skill-building and problem-solving sets. Note, everything the students see is assigned by you, the teacher.
  2. Assist students through immediate feedback.
    As students complete their assigned problems, they might receive hints and explanations to help them understand. Check out these screenshots of the platform. (See more in The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.)

    Example buggy message: “No. You might be thinking that the area is one half base times height, but you are looking for the perimeter.”

    Figure: An ASSISTments message shown just before the student hits the “done” button, showing two different hints and one buggy message that can occur at different points.

    Students immediately know if they’re right or wrong and can answer multiple times for partial credit, and, at the end of each assignment, each student receives an outcome report detailing their performance.

  3. Assess class performance.
    Data is also available to the teacher. Check out how easy they make it for teachers to gauge student progress.

    Figure: Symbols of success on the assignment report let you quickly assess student and class performance. The four symbols are a green check mark, a green X, a red X, and a highlighted red X.

    Figure: A popular ASSISTments report organizes student homework results in a grid, with tasks in columns and students in rows, and enables teachers to quickly identify which problems to review and what the common errors were.

  4. Analyze answers together (with your students).
    After teachers see which problems were routinely missed, class time can be spent on the most needed concepts. As the ASSISTments site says, “Homework and classwork flow seamlessly into instruction of new material.” You can use the information you gain from the reports to determine what you will cover the next day. If everyone gets a concept you can move on and not waste valuable class time covering material that is understood. ASSISTments can also help support groups or personalized work.

This four-step process models what needs to happen in effective formative assessment, which was discussed in the second post of this series. Students engage in an assessment for learning (in this case it’s their homework), receive specific, supportive, timely, and focused feedback on how to close the gap between their current and desired understanding, and the results of the assessment are used to drive the next learning encounter.

Based on the undergirding principles of formative assessment, it’s no surprise that ASSISTments meets the rigorous What Works Clearinghouse standards without reservation, and receives a strong rating as an evidence-based PK-12 program by Evidence for ESSA. Based on a randomized controlled trial examining 2,728 seventh grade students in Maine, on average, the use of ASSISTments “produced a positive impact on students’ mathematics achievement at the end of a school year” equivalent to a student at the 50th percentile without the intervention improving to the 58th percentile with it. In addition, as seen in other formative assessment studies, the largest gains were seen by students with low prior achievement. (Online Mathematics Homework Increases Student Achievement) ASSISTments helps you by helping the students who need it the most and seems to allow you to be in multiple places at once!
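As a rough back-of-the-envelope check on the reported gain (assuming normally distributed scores, which is my simplification, not the study's), moving a median student from the 50th to the 58th percentile corresponds to an effect size of about 0.2 standard deviations:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution
# The 50th percentile sits at z = 0; find the z-score of the 58th percentile.
effect_size = nd.inv_cdf(0.58) - nd.inv_cdf(0.50)
print(round(effect_size, 2))  # → 0.2
```

That is a modest-looking number, but consistent with what large-scale education interventions typically achieve over a single school year.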

One of the reasons I’m so excited about this program is that it was thoughtfully designed with teachers and students in mind. Neil and Cristina Heffernan, the co-creators of ASSISTments, write this in The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.

“In many ways the list of problem sets in ASSISTments is a replacement for the assignment from the textbook or the traditional worksheet with questions on it. This way the teachers do not have to make a drastic change to their curriculum in order to start using the system. But more importantly they can make more sense of the data they get back since they are the ones who selected and assigned the problems. This is in contrast to the idea of an artificial intelligence automatically deciding what problem is best for each student. While this is a neat idea, it takes the teacher out of the loop and makes the computer tutorial less relevant to what is going on in the classroom.”

Exactly! I want formative assessment–in and out of the classroom–to meaningfully guide my instruction. Furthermore, I really appreciate that ASSISTments was designed to give teachers assistance in the workflow, to inform them about what students are learning, and, more importantly, not learning, so that teachers can make an informed decision on how to best help their students. I hope including teachers in the design process and helping teachers work more effectively with their students becomes a standard for educational AIs.

You need a school verified Google Classroom or paid Canvas account to use it, but ASSISTments itself is free! Unfortunately, our school uses a basic Canvas account, but customer service at ASSISTments allowed me to have a teacher role using a personal account so I could fully explore the program. I’m hopeful that this can be a transformative homework solution for math students! I think it will be worth your time to see what ASSISTments can offer you.

Note, I am not affiliated with ASSISTments and was not paid or asked to write about ASSISTments. I learned about it from CIRCL, and I was intrigued because I teach mathematics, but everything I discovered about it was through my research and my excitement about its potential is my own.

Watch this short video to learn more about ASSISTments, and read more about co-creator Neil Heffernan in his CIRCL Perspective.

Thank you to ASSISTments’ co-creator Cristina Heffernan and to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Ambitious Mashups and CIRCLS

By CIRCL Educators

CIRCL, the Center for Innovative Research in Cyberlearning, has come to an end, but don’t worry, we’re getting ready to roll over to a new project called CIRCLS, the Center for Integrative Research in Computing and Learning Sciences. Stay tuned here and we’ll keep you apprised of any changes. Of course we’ll still be working to bridge practice and research and share what CIRCLS is doing and what we, as educators, are thinking about and facing in our work. If you’d like to get more involved with our work, please contact us! We’re looking for more educators to think and write with.

In the meantime, before we transition to CIRCLS, we want to dive into the final report from CIRCL. In it, we reflect on what we’ve learned since 2013 when CIRCL started. The world and technology have both changed quite a bit. Over the years, CIRCL worked with the approximately 450 projects funded by the National Science Foundation through their Cyberlearning program. Cyberlearning is a hard term to grasp, but the program and the projects in it were about using what we know about how people learn and creating new design possibilities for learning with emerging technology. In addition, in a 2017 report, we noted a strong commitment to equity in the CIRCL community. That commitment continues and is discussed in our final report with recommendations for future work to strengthen this important theme.

One thing that struck us in reviewing the projects was how many innovative designs there were to enhance learning with technology. As we tried to categorize the projects, we noticed that most contained combinations of multiple technologies, learning theories, and methods. While this may sound confusing, these combinations were purposefully designed to help augment learning and deepen our understanding of the technologies and how people learn. We looked for a term to explain this phenomenon and couldn’t find one, so we came up with a new one: Ambitious Mashups. In addition to the importance of mashing things up, the report discusses several other themes.

Next week, we’ll be part of a webinar and talk through the different sections of the report. The webinar welcomes practitioners who want to learn more about research on emerging technologies from NSF-funded projects. While the projects aren’t always ready for use in a school today, they offer ideas for new projects and new ways to think about how to use technology to support learning. The ambitious mashup projects think about learning in different ways and show how grounding activities in what we know about how people learn can help meet learning goals and outcomes. Ambitious mashups are exciting and spark new ideas. CIRCL Educator Sarah Hampton says CIRCL reports can “help you get excited about the future landscape of education.”

We invite you to join us for the webinar, Ambitious Mashups and Reflections on a Decade of Cyberlearning Research.
Date: 10/28/2020
Time: 4 pm Eastern / 3 pm Central / 1 pm Pacific

Register

Book Review: You Look Like a Thing and I Love You

By Judi Fusco

During CIRCL Educators’ Summer of Artificial Intelligence (AI), I read the book You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place, by Dr. Janelle Shane. I got the recommendation from fellow CIRCL Educator Angie Kalthoff.

I found the book helpful even though it is not about AI in education. I read and enjoyed the e-book and the audio version. As I started writing this review, I was driving somewhere with one of my teenagers and I asked if we could listen to the book. She rolled her eyes but was soon laughing out loud as we listened. I think that’s a great testament to how accessible the book is.

Teaching an AI

Many of us use AI products like Siri or Alexa on a regular basis. But how did they get “smart?” In the book, Dr. Shane writes about the process of training machine learning systems to be “intelligent”. She tells us how they certainly don’t start smart. Reading about the foibles, flailings, and failings that she has witnessed in her work helped me understand why it is so important to get the training part right and what needs to be considered as new products are developed.

Dr. Shane starts out comparing rule-based and machine learning AI systems, two very different types. Briefly, a rule-based system uses rules written by human programmers as it works with data to make decisions. By contrast, a machine learning algorithm is not given rules. Instead, humans pick an algorithm, give it a goal (maybe to make a prediction or decision), and provide example data that helps it learn; the algorithm then has to figure out how to achieve that goal. Depending on the algorithm, it will discover its own rules (for some this means adjusting weights on connections between inputs and outputs). From the example data, the algorithm “learns,” or rather improves what it produces, through its experience with that data. It’s important to note that the algorithm is doing the work to improve, not a human programmer. In the book, Dr. Shane explains that after she sets up an algorithm with a goal and gives it training data, she goes to get coffee and lets it work.
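The contrast can be made concrete with a toy sketch. Everything here is invented for illustration (the pass/fail data, the feature, the thresholds): a rule-based system ships with a rule a human wrote, while even a minimal learning algorithm finds its own rule from labeled examples:

```python
# 1) Rule-based: a human programmer writes the rule directly.
def rule_based_pass(score):
    return score >= 60  # threshold chosen by the programmer

# 2) Machine learning (minimal form): the program is given labeled
#    examples of (score, passed) and searches for its own threshold.
examples = [(35, False), (50, False), (62, True), (80, True), (91, True)]

def learn_threshold(data):
    """Try every candidate threshold and keep the one that best fits the data."""
    best_t, best_correct = 0, -1
    for t in range(0, 101):
        correct = sum((score >= t) == label for score, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

learned_t = learn_threshold(examples)
print(learned_t)  # some threshold between 50 and 62 that fits all examples
```

Real machine learning algorithms adjust many weights over huge datasets rather than one threshold over five points, but the division of labor is the same: the human supplies the goal and the examples; the algorithm discovers the rule.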

Strengths and Weaknesses

There are strengths and weaknesses in the machine learning approach. A strength is that as the algorithm tries to reach its goal, it can detect relationships and features in the data that the programmer may not have thought important, or may not even have been aware of. This can be either good or bad.

One way it can be good is that sometimes an AI tries a novel solution because it isn’t bogged down by knowledge of the constraints and rules of the world. However, not knowing the world’s constraints can also lead to impossible ideas. For example, Dr. Shane discusses how, in simulated worlds, an AI will try things that won’t work in our world because it doesn’t understand the laws of physics; a human programmer has to specify what is and isn’t possible. An AI will also take shortcuts that reach the goal but aren’t fair. In one case, while playing a game, an AI system discovered that its opponent’s computer didn’t have enough RAM for a specific move. The AI would make that move, cause the other computer to run out of RAM and crash, and win every time. Dr. Shane discusses many other instances where an AI exploits a weakness to look like it’s smart.

In addition, machine learning work has taught us that an algorithm highlights and exacerbates problems present in its training data. For example, much training data comes from the internet, and much of the data on the internet is full of bias. When biased data are used to train an AI, the biases and problems in the data become what guide the AI toward its goal. Because of this, our biases, found on the internet, become perpetuated in the decisions the machine learning algorithms make. (Read about some of the unfair and biased decisions that have occurred when AI was used to make decisions about defendants in the justice system.)

Bias

People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)

In the book, Dr. Shane says that machine learning algorithms work by “copying humans”: they don’t find the “best solution” or an unbiased one, they seek a way to do “what the humans would have done” (p. 24) in the past, because of the data they use for training. What do you think would happen if an AI screened job candidates based on how companies typically hired in the past? (Spoiler alert: hiring practices do not become less discriminatory; the algorithms perpetuate and extend biased hiring.)
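To see how quickly “copying humans” reproduces bias, consider this toy screening model; the groups, rates, and functions are all invented for illustration. It does nothing but learn historical hire rates per group, yet that is enough for the bias in the data to become the model’s rule.

```python
# Historical records: (group, hired). Group B was hired far less often.
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 2 + [("B", False)] * 8

def fit_hire_rates(records):
    """'Learn' each group's historical hire rate from the data."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def screen(candidate_group, rates, threshold=0.5):
    """Recommend a candidate if their group was usually hired before."""
    return rates[candidate_group] >= threshold

rates = fit_hire_rates(history)
print(screen("A", rates))  # prints True
print(screen("B", rates))  # prints False: past discrimination, now automated
```

Nothing in the code mentions fairness or discrimination; the model simply does “what the humans would have done,” which is exactly the problem.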

A related problem comes about because machine learning AIs make their own rules. In some machine learning algorithms, these rules are not explicitly stated, so we (humans, a.k.a. the creators and users) don’t always know what an AI is doing. There are calls for machine learning systems to write out the rules they create so that humans can understand them, but this is a very hard problem and won’t be easy to solve. (In addition, some algorithms are proprietary, and companies won’t let us know what is happening inside them.)

Integrating AIs into our lives

It feels necessary to know how a machine is making decisions when it is tasked with decisions about people’s lives (e.g., prison release, hiring, and job performance). We should not blindly trust how AIs make decisions. AIs have no idea of the consequences of their decisions. We can still use them to help us with our work, but we should be very cautious about the types of problems we automate. We also need to ensure that an AI makes clear what it is doing, how humans can review and override its decisions, and what the consequences of an incorrect decision would be. Dr. Shane reminds us that an “AI can’t be bribed but it also can’t raise moral objections to anything it’s asked to do” (p. 4).

In addition, we need to ensure the data we use for training are as representative as possible to avoid bias, make sure that a system can’t take shortcuts to meet its goal, and make sure systems work well across many different populations (e.g., across genders, races, and people with learning differences). Also, an AI is not as smart as a human; in fact, Dr. Shane shares that most AI systems using machine learning (in 2019) have the approximate brainpower of a worm. Machine learning can help us automate tasks, but we still have a lot of work to do to ensure that AIs don’t harm people.

What are your thoughts or questions on machine learning or other types of AI in education? Tweet to @CIRCLEducators and be part of the conversation.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

See a recent TED Talk by author Janelle Shane.


Notes:

  1. Read the book to find out what the title means!
  2. Machine learning is one of several AI approaches.
  3. Machine Learning is a general term that also includes neural networks and the more specialized neural network class of Deep Learning. Note also that decision-tree algorithms are a famous class of ML algorithms that do learn explicit rules.
  4. Some algorithms “learn” with labeled examples and some without, but that’s a discussion beyond the scope of this post.

Algorithms, Educational Data, and EdTech: Anticipating Consequences for Students

By Pati Ruiz and Amar Abbott

The 2020-2021 school year is underway in the U.S. and for many students, that means using edtech tools in a fully online or blended learning environment. As educators, it is our responsibility to consider how students are using edtech tools and what the unanticipated consequences of using these tools might be. Before introducing edtech tools to students, administrators should spend time considering a range of tools to meet the needs of their students and teachers. In a recent blog post, Mary Beth Hertz described the opportunities for anti-racist work in the consideration and selection of the tools students use for learning. Hertz identified a series of questions educators can ask about the tools they will adopt to make sure those tools are serving the best interest of all of their students. Two of the questions in Hertz’s list ask us to consider data and algorithms. In this post, we focus on these two questions and Hertz’s call to “pause and reflect and raise our expectations for the edtech companies with which we work while also thinking critically about how we leverage technology in the classroom as it relates to our students of color.” The two questions are:

  1. How does the company handle student data? and,
  2. Has the company tested its algorithms or other automated processes for racial biases?

To help us better understand the issues around these two questions, we will discuss the work of two researchers: Dr. Safiya Noble and Dr. Ruha Benjamin. This post expands on our previous post about Dr. Noble’s keynote address — The Problems and Perils of Harnessing Big Data for Equity & Justice — and her book, Algorithms of Oppression: How Search Engines Reinforce Racism. Here, we also introduce the work of Dr. Ruha Benjamin, and specifically the ideas described in her recent book Race After Technology: Abolitionist Tools for the New Jim Code.

Student Data

In order to understand how companies handle student data, we first need to consider the concept of data. Data are characteristics or information collected in a manner capable of being communicated or manipulated by some process (Wiktionary, 2020). In her keynote speech, Dr. Noble discusses the social construction of data and the importance of paying attention to the assumptions made about the characterization of the data being collected. In her book, Dr. Noble shows how Google’s search engine perpetuates harmful stereotypes about Black women and girls in particular. Dr. Benjamin describes the data justice issues we are dealing with today as ones that come from a long history of systemic injustice, in which those in power have used data to disenfranchise Black people. In her chapter titled Retooling Solidarity, Reimagining Justice, Dr. Benjamin (2019) encourages us to “question, which humans are prioritized in the process” (p. 174) of design and data collection. With edtech tools, the humans prioritized in the process are teachers and administrators; they are the “clients.” We also need to consider and prioritize the affected population: students.


When it comes to the collection and use of educational data and interventions for education, there is much work to be done to counteract coded inequities of the “techno status quo.” In her keynote, Dr. Noble offered a list of suggestions for interventions including:


  1. Resist making issues of justice and ethics an afterthought or additive
  2. Protect vulnerable people (students) from surveillance and data profiling


Center Issues of Justice and Ethics

As described by Tawana Petty in the recent Wired article Defending Black Lives Means Banning Facial Recognition, Black communities want to be seen and not watched. The author writes:

“Simply increasing lighting in public spaces has been proven to increase safety for a much lower cost, without racial bias, and without jeopardizing the liberties of residents.”

What is the equivalent of increasing light in education spaces? What steps are being taken to protect students from surveillance and data profiling? How are teachers and students trained on the digital tools they are being asked to use? How are companies asked to be responsible about the kinds of data they collect?

Schools have legal mandates meant to protect students’ rights, such as the Family Educational Rights and Privacy Act (FERPA) in the U.S. and other policies that protect student confidentiality regarding medical and educational records. Although a lot of forethought has gone into protecting students’ confidentiality, has the same critical foresight been applied when purchasing hardware and software?


In her keynote speech, Dr. Noble described the tracking of students on some university campuses through the digital devices they connect to campus internet or services (like a library or learning management system). The reasoning behind tracking students is to allocate university resources effectively and help students be successful. However, in this article, Drew Harwell writes about the complex ethical issues regarding students being digitally tracked and an institution’s obligation to keep students’ data private. So, before software or hardware is used or purchased, privacy and ethics issues must be discussed and addressed. Special energy needs to be dedicated to uncovering any potential “unanticipated” consequences of the technologies as well. After all, without proper vetting, a bad decision could harm students.

Protect Vulnerable Students

Protecting vulnerable students includes being able to answer Hertz’s question: “Has the company tested its algorithms or other automated processes for racial biases?” But even when a company has tested its algorithms and automated processes, there is often still work to be done, because “unanticipated” results continue to happen. Twitter spokesperson Liz Kelley recently posted a tweet saying: “thanks to everyone who raised this. we tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do.”

She was responding to the experiment shown below where user @bascule posted: “Trying a horrible experiment…Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?”

Twitter’s machine learning algorithm chose to center the white face instead of the Black face when the white profile picture was shown on top, with white space in between and the Black profile picture below. It made the same choice when the order was reversed, with the Black profile picture on top and the white one below.

A horrible twitter experiment with face recognition. The algorithm selects the white face regardless of placement

As we can see, the selection and use of tools for learning is complicated and requires balancing many factors. As CIRCL Educators we hope to provide some guidance to ensure the safety of students, families, and their teachers. Additionally, we are working to demystify data, algorithms, and AI for educators and their students. This work is similar to the work being done by public interest technologists in the communities and organizations described by both Noble and Benjamin. We don’t have all of the answers, but these topics are ones that we will continue to discuss and write about. Please share your thoughts with us by tweeting @CIRCLEducators.


References

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press.

data. (2020, August 12). Wiktionary, The Free Dictionary. Retrieved 15:31, August 26, 2020 from https://en.wiktionary.org/w/index.php?title=data&oldid=60057733.

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.


AI and Formative Assessment

by Sarah Hampton

In my last post, I talked about effective formative assessments and their powerful impact on student learning. In this post, let’s explore why AI is well-suited for formative assessment.

  1. AI can offer individualized feedback on specific content.
  2. AI can offer individualized feedback that helps students learn how to learn.
  3. AI can provide meaningful formative assessment outside of school.
  4. AI might be able to assess complex and messy knowledge domains.

Individualized Feedback on Content Learning

I think individualized feedback is the most powerful advantage of AI for assessment. As a teacher, I can only be in one place at a time looking in one direction at a time. That means I have two choices for feedback: I can take some time to assess how each student is doing and then address general learning barriers as a class, or I can assess and give feedback to students one at a time. In contrast, AI allows for simultaneous individualized feedback for each student.

“AI applications can identify pedagogical materials and approaches adapted to the level of individual students, and make predictions, recommendations and decisions about the next steps of the learning process based on data from individual students. AI systems assist learners to master the subject at their own pace and provide teachers with suggestions on how to help them.” (Trustworthy artificial intelligence (AI) in education: promises and challenges)

Going one step further, AI can assess students without disrupting their learning through something called stealth assessment. While students work, an AI can quietly collect data in the background, such as the time it takes to answer questions or which incorrect strategies students tried before succeeding, and organize the data into a dashboard so teachers can decide what to focus on or clear up the next day in class. Note: As a teacher, I want the AI to help me do what I do best. I definitely want to see what each student needs in their learning. I also want to be able to control when the AI should alert me to intervene (as a caring human) instead of it trying to do something on its own that it isn’t capable of doing well.
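As a concrete, entirely hypothetical sketch of what this background data collection might look like, the snippet below logs per-question timing and attempts, then rolls them up into a teacher-facing summary. The event fields, thresholds, and `dashboard` function are invented for illustration; the key design choice is that the system flags students for the teacher rather than intervening on its own.

```python
from statistics import mean

# Each event the system logs quietly while a student works:
# (student, question, seconds_taken, attempts_before_correct)
events = [
    ("Ana", "q1", 12.0, 1),
    ("Ana", "q2", 95.0, 4),
    ("Ben", "q1", 30.0, 2),
    ("Ben", "q2", 40.0, 1),
]

def dashboard(events, slow_seconds=60.0, max_attempts=2):
    """Summarize per-student timing and flag who may need help."""
    summary = {}
    for student in {e[0] for e in events}:
        rows = [e for e in events if e[0] == student]
        avg_seconds = mean(r[2] for r in rows)
        avg_attempts = mean(r[3] for r in rows)
        summary[student] = {
            "avg_seconds": avg_seconds,
            "avg_attempts": avg_attempts,
            # Alert the teacher; don't try to intervene automatically.
            "flag_for_teacher": avg_seconds > slow_seconds
                                or avg_attempts > max_attempts,
        }
    return summary

report = dashboard(events)
print(report["Ana"]["flag_for_teacher"])  # prints True (many attempts on q2)
```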

Feedback That Helps Students Learn How to Learn

“Two experimental research studies have shown that students who understand the learning objectives and assessment criteria and have opportunities to reflect on their work show greater improvement than those who do not (Fontana & Fernandes, 1994; Frederikson & White, 1997).” (The Concept of Formative Assessment)

In the last post, I noted that including students in the process of self-assessment is critical to effective formative assessment. After all, we ultimately want students to be able to self-regulate their own learning. But, as one teacher, it can sometimes be difficult to remind students individually to stop and reflect on their work and brainstorm ways to close the gap between their current understanding and their learning goal. By contrast, regulation prompts can be built into AI software so students routinely stop and check for understanding and defend their reasoning, giving students a start on learning how to self-regulate.

For example, this is done in Crystal Island, an AI game-based platform for learning middle school microbiology, “students were periodically prompted to reflect on what they had learned thus far and what they planned to do moving forward…Students received several prompts for reflection during the game. After completing the game or running out of time, students were asked to reflect on their problem-solving experience as a whole, explaining how they approached the problem and whether they would do anything differently if they were asked to solve a similar problem in the future.” (Automated Analysis of Middle School Students’ Written Reflections During Game-Based Learning)

      In-game reflection prompt presented to students in Crystal Island

Meaningful Formative Assessment Outside of School

Formative assessment and feedback can come from many sources, but, traditionally, the main source is the teacher. Students only have access to their teacher inside the classroom and during class time. In contrast, AI software can provide meaningful formative assessment anytime and anywhere which means learning can occur anytime and anywhere, too.

In the next post, we’ll look at how one AI tool, ASSISTments, is using formative assessment to transform math homework by giving meaningful individualized feedback at home.

Assessing Complexity and Messiness

In the first post of the series, I discussed the need for assessments that can measure the beautiful complexity of what my students know. I particularly like the way Griffin, McGaw, and Care state it in Assessment and Teaching of 21st Century Skills:

“Traditional assessment methods typically fail to measure the high-level skills, knowledge, attitudes, and characteristics of self-directed and collaborative learning that are increasingly important for our global economy and fast-changing world. These skills are difficult to characterize and measure but critically important, more than ever. Traditional assessments are typically delivered via paper and pencil and are designed to be administered quickly and scored easily. In this way, they are tuned around what is easy to measure, rather than what is important to measure.”

We have to have assessments that can measure what is important and not just what is easy. AI has the potential to help with that.

For example, I can learn more about how much my students truly understand a topic from reading a written response than from a multiple-choice response. However, it’s not possible to assess students this way frequently because of the time it takes to read and give feedback on each essay. (Consider that some secondary teachers see 150+ students a day!)

Fortunately, one major area for AI advancement has been in natural language processing. AIs designed to evaluate written and verbal ideas are quickly becoming more sophisticated and useful for providing helpful feedback to students. That means that my students could soon have access to a more thorough way to show what they know on a regular basis and receive more targeted feedback to better their understanding.
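Real natural language processing systems are far more sophisticated than this, but even a deliberately simple sketch shows the shape of the idea: compare a student’s response against a rubric of key concepts and report what is covered and what is missing. The rubric, scoring rule, and `feedback` function below are invented for illustration.

```python
import re

# Hypothetical rubric: key concepts a complete answer should mention.
RUBRIC_CONCEPTS = {"photosynthesis", "sunlight", "carbon", "dioxide", "glucose"}

def feedback(response):
    """Return a coverage score and the rubric concepts still missing."""
    words = set(re.findall(r"[a-z]+", response.lower()))
    covered = RUBRIC_CONCEPTS & words
    missing = RUBRIC_CONCEPTS - words
    return len(covered) / len(RUBRIC_CONCEPTS), sorted(missing)

score, missing = feedback("Plants use sunlight and carbon dioxide to make glucose.")
print(score, missing)  # prints 0.8 ['photosynthesis']
```

A system like this could turn `missing` into a targeted hint (“Can you name the process you just described?”) instead of making the student wait days for a grade.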

While the purpose of this post is to communicate the possible benefits of AI in education, it’s important to note that my excitement about these possibilities is not a carte blanche endorsement for them. Like all tools, AI has the potential to be used in beneficial or nefarious ways. There is a lot to consider as we think about AI and we’re just starting the conversation.

As AI advances and widespread classroom implementation becomes increasingly more possible, it’s time to seriously listen to those at the intersection of the learning sciences and artificial intelligence like Rose Luckin. “Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policymakers to develop the ethical framework within which AI assessment can thrive and bring benefit.” (Towards artificial intelligence-based assessment systems)

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

We are still at the beginning of our conversation around AI in Education. What do you think? Do the possible benefits excite you? Do the possible risks concern you? Both? Let us know @CIRCLEducators.


Considering Techquity in the Classroom

By Merijke Coenraad

Merijke Coenraad is a PhD Candidate in the Department of Teaching & Learning, Policy & Leadership in the College of Education at the University of Maryland. She is a former middle school teacher. Her research focuses on the intersections of educational technology and equity including the creation of materials, platforms, and experiences in partnership with teachers and youth through participatory design methods.

Flashback to a Spanish Classroom (2016)

Chromebooks out. Hushed silence. Each student leans over their computer. Tension in the air. I yell, “GO!” and with one word the room erupts as groups hurriedly work together to identify vocabulary words before their classmates. In loud whispers, students ask their partners for words: “Calcetines, who has socks?” One mistake and the group will have to start over; the stakes are high, and no star student can single-handedly win the game for their peers.

Quizlet transformed flashcards, a time-consuming (and often lost or forgotten) physical learning tool, into a digital learning experience. My students practiced their vocabulary words through drills and games all week, and on Friday we played Quizlet Live.

When I was still in the classroom, I loved to bring new technology into my social studies and Spanish lessons. I got excited discovering tools like EdPuzzle and Padlet when they were first breaking onto the education stage. With 1 to 1 Chromebooks in my middle school classroom, there was hardly a class period where students were not somehow connected to technology and each of these technologies meant creating a new account. Looking back, I realize that I was naïve while teaching. As I brought tool after tool to my students, I didn’t think deeply about the data collection ramifications and the way that the very tools that could enhance learning might be treating my students inequitably and perpetuating the structural racism and human biases that I worked each day to dismantle. The educational technology that I brought into my classroom had positive effects, but it also had hidden consequences, most of which I might never know.

Four years after leaving the classroom to begin my PhD, I focus my work on one thing: Techquity, the intersection of technology and equity. This focus is driven by the students I taught and the many times I saw technology act as both an access point and a barrier to their education. Even though I wasn’t thinking about data collection, algorithmic bias, and the effects of AI on the students in my classroom, I was still focused on how technology helped and hindered my students’ education. But those barriers and hindrances go beyond the devices and internet access I have long considered. In the last year, I have learned a lot about the forces within and around technology that cause inequities. I have learned about the Coded Gaze of AI technologies from Joy Buolamwini and the New Jim Code from Ruha Benjamin. I’ve learned about the biases inherent in the very design of technologies from Sara Wachter-Boettcher, and how algorithms can be Weapons of Math Destruction from Cathy O’Neil. This has led me to focus on how I can not only be more cognizant of the biases of technology, but also teach students about them.

Techquity: Co-designing with Kids

To learn more about what kids think about Techquity concerns, I partnered with a youth design team to hear what they had to say and learn which Techquity concerns interested them most. I find kid insight critical whenever I explore new topics to teach students. The team consisted of seven Black youth, ages 8 to 13, who met twice a week to design technologies and learn about being designers.

Let’s look a little bit at what the kids had to say about Techquity.

While they didn’t have the vocabulary to name algorithmic bias or biases in voice recognition technology, the kids quickly began offering examples of how technologies can be good and bad and how even single technologies can have good and bad sides. For example, one group identified Siri as helpful because “she” can give information without typing, but they also were worried that Siri doesn’t always understand them and “SIRI CAN LISTEN TO US!!!!” While the AI in their phones allowed the students to access all sorts of information, they were not immune to considerations of what it meant for a device to always be listening for, “Hey Siri…”

As our conversation turned and I introduced the kids to some common examples of Techquity concerns such as data collection, targeted advertising, misidentification by AI, and non-diverse tech design teams, the kids continued to describe their own examples. They could recollect times when they received targeted advertising based on location or a recent website visit.

Techquity Concerns

Among the common Techquity concerns we discussed were:

  • Algorithms (computer programs) don’t treat everyone fairly
  • Technology development teams are frequently not diverse
  • Alexa, Google Home, and Siri are always listening to me
  • I get personalized ads based on data companies collect about me
  • Technology is not always accessible for individuals with disabilities
  • Companies sell my data
  • Sensors and systems like Alexa, Google Home, and Siri get confused about how I look or what I say
  • People don’t understand how technology works
  • Machine learning and facial recognition isn’t trained well enough to recognize everyone

The kids each ranked the Techquity concerns from “very important to me” to “not very important to me.” The two most highly ranked were algorithmic bias and non-diverse tech companies. The kids were especially concerned that people who looked like them were often not represented on design teams, even though they themselves were designers, and they wondered what this meant for the technologies being designed.

As their final design task, the kids designed ways to teach other kids about Techquity by drawing their ideas out on an online platform mimicking paper and pencil. Interestingly, the kids didn’t want to move away from technology just because it could be biased, they just wanted it to be created in more equitable ways and to be used to teach others. Their teaching often included advanced algorithms and even AI. They designed scenarios using robots and adaptive software to allow other kids to experience obvious Techquity concerns and learn from their experiences. One girl, Persinna, explicitly discussed the three-member design team shown in her game as having 2 girls and 1 boy because “that is Techquity.” Kabede felt very strongly that data collection by tech companies was a big concern. He started making connections to actual tools he knows such as DuckDuckGo, a search engine that does not profile users and focuses on user privacy.

What I Would Consider Now If I Were Still a Teacher

I’d start from what these kids already know about Techquity and how algorithms and AI are affecting their lives, and build on that. I would educate students about the biases inherent in Google searches, which rank results not simply by the popularity of links, as is commonly assumed, but also based on user profiles and advertising. I would follow Kabede’s recommendation and have students use a search engine like DuckDuckGo to prevent tracking and allow for private searches. I would challenge students to think about where algorithms, AI, and technology design are already affecting their lives and how technologies might work better for some individuals than for others. We would talk about the sensors in automatic sinks, paper towel dispensers, and medical devices, and how those sensors rely on reflected light and therefore often work better for people with lighter skin. We would discuss Joy Buolamwini’s experiences and work, and talk about how machine learning training sets are often not adequate to identify all people well and how this has direct consequences for the use of AI in policing and surveillance.

While the students in my classroom wouldn’t be the ones causing technology bias, I would make sure they were aware of it and of its direct implications for their lives. Most of all, I would ground these discussions in students’ lived experiences. Just like the kids on the design team, my students have inevitably experienced technology bias; they just might not have had words for it or known why it was happening. The more I could bring Techquity concerns to their attention, the more they could protect themselves (and their communities) and make educated decisions about their lives with technology. I know my middle school students wouldn’t give up their technology, and knowing about the biases held by its designers probably wouldn’t change their opinion that technology is, as Joshua said in the design session, “the best thing ever.” But knowing more about their digital footprint and how companies use their information gives them a small advantage. In this case, knowledge of Techquity concerns could give them power over their data and their technology use.