
ASSISTments: A Forward-thinking Formative Assessment AI Ready to Use in Your Classroom Today

by Sarah Hampton

In my last post, I discussed the ways AI can enhance formative assessment. In this post, let’s take a look at the AI example I’m most excited about and how it’s already benefited 11,000 teachers!

ASSISTments seems both practical and forward-thinking, a unique combination. Sometimes it can be frustrating getting excited about new technologies when they’re still in development and not yet ready for the classroom. But, unlike many cutting-edge projects I read about, ASSISTments is ready to implement now.

In their own words, “ASSISTments is more than an assessment tool. It improves the learning journey, translating student assessment to skill building and mastery, guiding students with immediate feedback, and giving teachers the time and data insights to drive every step of their lesson in a meaningful way.”

ASSISTments works through a four-step process to help you get started:

  1. Create assignments.
    Teachers select questions from existing or custom question banks. I was really impressed with the number and variety of sets already on the site. There are question sets from select open educational resources, textbook curricula, and released state tests ready to be assigned. There are also pre-made general skill-building and problem-solving sets. Note that everything the students see is assigned by you, the teacher.
  2. Assist students through immediate feedback.
    As students complete their assigned problems, they might receive hints and explanations to help them understand. Check out these screenshots of the platform. (See more in The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.)

    Figure: An ASSISTments message shown just before the student hits the “done” button, showing two different hints and one buggy message that can occur at different points. The buggy message reads, “No. You might be thinking that the area is one half base times height, but you are looking for the perimeter.”

    Students immediately know if they’re right or wrong and can answer multiple times for partial credit. At the end of each assignment, each student receives an outcome report detailing their performance. (A minimal sketch of how a buggy-message rule might work appears after this list.)
  3. Assess class performance.
    Data is also available to the teacher. Check out how easy they make it for teachers to gauge student progress.

    Figure. Symbols of success: quickly assess student and class performance using the four symbols on your assignment report (green check mark, green X, red X, red X with highlight). An easy way to visualize student performance.

    Figure. A popular ASSISTments report organizes student homework results in a grid, with tasks in columns and students in rows, and enables teachers to quickly identify which problems to review and what the common errors were, as indicated by the annotations.
  4. Analyze answers together (with your students).
    After teachers see which problems were routinely missed, class time can be spent on the most needed concepts. As the ASSISTments site says, “Homework and classwork flow seamlessly into instruction of new material.” You can use the information you gain from the reports to determine what you will cover the next day. If everyone gets a concept, you can move on and not waste valuable class time covering material that is understood. ASSISTments can also help support group or personalized work.

This four-step process models what needs to happen in effective formative assessment, which was discussed in the second post of this series. Students engage in an assessment for learning (in this case it’s their homework), receive specific, supportive, timely, and focused feedback on how to close the gap between their current and desired understanding, and the results of the assessment are used to drive the next learning encounter.

Based on the undergirding principles of formative assessment, it’s no surprise that ASSISTments meets the rigorous What Works Clearinghouse standards without reservation, and receives a strong rating as an evidence-based PK-12 program by Evidence for ESSA. Based on a randomized controlled trial examining 2,728 seventh grade students in Maine, on average, the use of ASSISTments “produced a positive impact on students’ mathematics achievement at the end of a school year” equivalent to a student at the 50th percentile without the intervention improving to the 58th percentile with it. In addition, as seen in other formative assessment studies, the largest gains were seen by students with low prior achievement. (Online Mathematics Homework Increases Student Achievement) ASSISTments helps you by helping the students who need it the most and seems to allow you to be in multiple places at once!
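A quick aside for readers who like to translate percentiles into effect sizes: assuming approximately normal score distributions, moving the median student from the 50th to the 58th percentile corresponds to an effect size $d$ satisfying

$$\Phi(d) = 0.58 \quad \Longrightarrow \quad d = \Phi^{-1}(0.58) \approx 0.20,$$

where $\Phi$ is the standard normal cumulative distribution function. In other words, the reported gain is roughly a 0.2 standard-deviation effect.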

One of the reasons I’m so excited about this program is that it was thoughtfully designed with teachers and students in mind. Neil and Cristina Heffernan, the co-creators of ASSISTments, write this in The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching:

“In many ways the list of problem sets in ASSISTments is a replacement for the assignment from the textbook or the traditional worksheet with questions on it. This way the teachers do not have to make a drastic change to their curriculum in order to start using the system. But more importantly they can make more sense of the data they get back since they are the ones who selected and assigned the problems. This is in contrast to the idea of an artificial intelligence automatically deciding what problem is best for each student. While this is a neat idea, it takes the teacher out of the loop and makes the computer tutorial less relevant to what is going on in the classroom.”

Exactly! I want formative assessment–in and out of the classroom–to meaningfully guide my instruction. Furthermore, I really appreciate that ASSISTments was designed to give teachers assistance in the workflow, to inform them about what students are learning, and, more importantly, not learning, so that teachers can make an informed decision on how to best help their students. I hope including teachers in the design process and helping teachers work more effectively with their students becomes a standard for educational AIs.

You need a school-verified Google Classroom account or a paid Canvas account to use it, but ASSISTments itself is free! Unfortunately, our school uses a basic Canvas account, but customer service at ASSISTments allowed me to have a teacher role using a personal account so I could fully explore the program. I’m hopeful that this can be a transformative homework solution for math students! I think it will be worth your time to see what ASSISTments can offer you.

Note, I am not affiliated with ASSISTments and was not paid or asked to write about ASSISTments. I learned about it from CIRCL, and I was intrigued because I teach mathematics, but everything I discovered about it was through my research and my excitement about its potential is my own.

Watch this short video to learn more about ASSISTments, and read more about co-creator Neil Heffernan in his CIRCL Perspective.

Thank you to ASSISTments’ co-creator Cristina Heffernan and to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.


AI and Formative Assessment

by Sarah Hampton

In my last post, I talked about effective formative assessments and their powerful impact on student learning. In this post, let’s explore why AI is well-suited for formative assessment.

  1. AI can offer individualized feedback on specific content.
  2. AI can offer individualized feedback that helps students learn how to learn.
  3. AI can provide meaningful formative assessment outside of school.
  4. AI might be able to assess complex and messy knowledge domains.

Individualized Feedback on Content Learning

I think individualized feedback is the most powerful advantage of AI for assessment. As a teacher, I can only be in one place at a time looking in one direction at a time. That means I have two choices for feedback: I can take some time to assess how each student is doing and then address general learning barriers as a class, or I can assess and give feedback to students one at a time. In contrast, AI allows for simultaneous individualized feedback for each student.

“AI applications can identify pedagogical materials and approaches adapted to the level of individual students, and make predictions, recommendations and decisions about the next steps of the learning process based on data from individual students. AI systems assist learners to master the subject at their own pace and provide teachers with suggestions on how to help them.” (Trustworthy artificial intelligence (AI) in education: promises and challenges)

Going one step further, AI can assess students without disrupting their learning through something called stealth assessment. While students work, AI can quietly collect data in the background, such as the time it takes to answer questions or which incorrect strategies they tried before succeeding, and organize it into a dashboard so teachers can use that data to decide what to focus on or clear up the next day in class. Note: as a teacher, I want the AI to help me do what I do best. I definitely want to see what each student needs in their learning. I also want to be able to control when the AI alerts me to intervene (as a caring human) instead of it trying to do something on its own that it isn’t capable of doing well.
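Here is a hypothetical sketch of what that background collection might look like. The class and field names are my own invention for illustration, not taken from any particular platform.

```python
# A hypothetical sketch of "stealth assessment" logging: while students work,
# the system records timing and correctness in the background, then summarizes
# the results per student for a teacher dashboard. All names here are invented.
from collections import defaultdict

class StealthLog:
    def __init__(self):
        # student -> list of (question, seconds_taken, was_correct)
        self.events = defaultdict(list)

    def record(self, student, question, seconds, correct):
        self.events[student].append((question, seconds, correct))

    def dashboard_row(self, student):
        attempts = self.events[student]
        missed = sorted({q for q, _, ok in attempts if not ok})
        return {
            "student": student,
            "attempts": len(attempts),
            "avg_seconds": round(sum(s for _, s, _ in attempts) / len(attempts), 1),
            "review_tomorrow": missed,  # questions missed at least once
        }

log = StealthLog()
log.record("Ana", "Q1", 42.0, False)  # first try: wrong, took a while
log.record("Ana", "Q1", 18.0, True)   # second try: correct
log.record("Ana", "Q2", 30.0, True)
print(log.dashboard_row("Ana"))
# {'student': 'Ana', 'attempts': 3, 'avg_seconds': 30.0, 'review_tomorrow': ['Q1']}
```

The point of the sketch is the teacher-facing summary: the raw events stay in the background, and the teacher sees which questions to review the next day.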

Feedback That Helps Students Learn How to Learn

“Two experimental research studies have shown that students who understand the learning objectives and assessment criteria and have opportunities to reflect on their work show greater improvement than those who do not (Fontana & Fernandes, 1994; Frederikson & White, 1997).” (The Concept of Formative Assessment)

In the last post, I noted that including students in the process of self-assessment is critical to effective formative assessment. After all, we ultimately want students to be able to self-regulate their own learning. But, as one teacher, I can find it difficult to individually remind each student to stop, reflect on their work, and brainstorm ways to close the gap between their current understanding and their learning goal. By contrast, regulation prompts can be built into AI software so students routinely stop, check for understanding, and defend their reasoning, giving students a start on learning how to self-regulate.

For example, this is done in Crystal Island, an AI game-based platform for learning middle school microbiology, “students were periodically prompted to reflect on what they had learned thus far and what they planned to do moving forward…Students received several prompts for reflection during the game. After completing the game or running out of time, students were asked to reflect on their problem-solving experience as a whole, explaining how they approached the problem and whether they would do anything differently if they were asked to solve a similar problem in the future.” (Automated Analysis of Middle School Students’ Written Reflections During Game-Based Learning)

Figure. In-game reflection prompt presented to students in Crystal Island.

Meaningful Formative Assessment Outside of School

Formative assessment and feedback can come from many sources, but, traditionally, the main source is the teacher. Students only have access to their teacher inside the classroom and during class time. In contrast, AI software can provide meaningful formative assessment anytime and anywhere which means learning can occur anytime and anywhere, too.

In the next post, we’ll look at how one AI tool, ASSISTments, is using formative assessment to transform math homework by giving meaningful individualized feedback at home.

Assessing Complexity and Messiness

In the first post of the series, I discussed the need for assessments that can measure the beautiful complexity of what my students know. I particularly like the way Griffin, McGaw, and Care state it in Assessment and Teaching of 21st Century Skills:

“Traditional assessment methods typically fail to measure the high-level skills, knowledge, attitudes, and characteristics of self-directed and collaborative learning that are increasingly important for our global economy and fast-changing world. These skills are difficult to characterize and measure but critically important, more than ever. Traditional assessments are typically delivered via paper and pencil and are designed to be administered quickly and scored easily. In this way, they are tuned around what is easy to measure, rather than what is important to measure.”

We have to have assessments that can measure what is important and not just what is easy. AI has the potential to help with that.

For example, I can learn more about how much my students truly understand a topic from reading a written response than from a multiple-choice response. However, it’s not possible to frequently assess students this way because of the time it takes to read and give feedback on each essay. (Consider some secondary teachers who see 150+ students a day!)

Fortunately, one major area for AI advancement has been in natural language processing. AIs designed to evaluate written and verbal ideas are quickly becoming more sophisticated and useful for providing helpful feedback to students. That means that my students could soon have access to a more thorough way to show what they know on a regular basis and receive more targeted feedback to better their understanding.

While the purpose of this post is to communicate the possible benefits of AI in education, it’s important to note that my excitement about these possibilities is not a carte blanche endorsement for them. Like all tools, AI has the potential to be used in beneficial or nefarious ways. There is a lot to consider as we think about AI and we’re just starting the conversation.

As AI advances and widespread classroom implementation becomes increasingly possible, it’s time to listen seriously to those at the intersection of the learning sciences and artificial intelligence, like Rose Luckin. “Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policymakers to develop the ethical framework within which AI assessment can thrive and bring benefit.” (Towards artificial intelligence-based assessment systems)

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

We are still at the beginning of our conversation around AI in Education. What do you think? Do the possible benefits excite you? Do the possible risks concern you? Both? Let us know @CIRCLEducators.


Formative Assessment

By Sarah Hampton

In my last post, I talked about how artificial intelligence can improve standardized summative assessments. In the next three posts, I want to: 1) consider how focusing on formative assessments could be even more helpful, 2) explore why AI is well-suited for it, and 3) showcase how one AI tool is using formative assessment to transform math homework.

For this post, let’s begin by comparing summative and formative assessments and the value they bring to the education table.

High-stakes standardized summative assessments and end-of-course exams are designed to give states, districts, and schools broad data based on average scores of many students to help them evaluate program effectiveness. While these assessments may also give a snapshot of a student’s overall understanding in a course and hold value for student placement the following year, the resulting data is not intended to help students learn more about the subject. For example, Jonathan Supovitz states in his article, Is High-Stakes Testing Working?, “These studies have typically found that data [from high-stakes assessment] provide general information about student performance but lack the nuance to provide fine-bore instructional guidance.”

On the other hand, formative assessments are designed to give a snapshot of a student’s specific content understanding with the goal of improving student learning based on the results.

“Teachers and schools can use formative assessment to identify student understanding, clarify what comes next in their learning, trigger and become part of an effective system of intervention for struggling students, inform and improve the instructional practice of individual teachers or teams, help students track their own progress toward attainment of standards, motivate students by building confidence in themselves as learners, fuel continuous improvement processes across faculties, and, thus, drive a school’s transformation.” (Maximizing the Power of Formative Assessments)

Note: End-of-term or end-of-year grades given by a teacher are also a summative assessment, but they are different from standardized tests and have different issues. Those issues are beyond the scope of this post.

While reading more about types of assessments, I realized one of the reasons I feel the tension between teaching for students and teaching for tests is because we [policy makers, administrators, teachers, families, etc.] keep focusing on summative assessments to do a job they were never intended to do. If our goal is to improve student learning, then we should use the tool designed for that and leverage more formative assessment. As stated in chapter five of New Assessments and Environments for Knowledge Building, “Assessment reform requires a radical shift from a ‘summative assessment which ranks individuals’ to a ‘formative assessment of the learning environment that helps all the students reach the next level of learning.’”

Formative assessments are really powerful when used the right way. Take a look at these two studies.

The authors of Using Formative Assessment and Metacognition to Improve Student Achievement share, “In a series of landmark review articles, Black and Wiliam (1998a, 1998b, 2009) dramatically highlighted formative assessment’s contribution to precollege student learning. They concluded that achievement gains generated by using formative assessment across a range of content domains were among the largest ever reported for education interventions. Notably, the largest gains were realized among low achievers.” Of the over 250 publications Black and Wiliam analyzed, approximately 20 demonstrated effect sizes that would represent “a gain that is roughly double the average growth U.S. children in the upper primary to lower secondary grades would be expected to make on standardized tests in a school year.” (The Impact of Formative Assessment and Learning Intentions on Student Achievement)

Similarly, the authors of Formative Assessment and Next-Generation Assessment Systems: Are We Losing an Opportunity? highlight a review of 196 studies on feedback (an essential component of formative assessments) which “reported that feedback had an average effect size of 0.79 standard deviation – an effect greater than student prior cognitive ability, socioeconomic background, and reduced class size (Hattie & Timperley, 2007, p. 83).”

Notice that I said when used the right way. According to Black and Wiliam (2009), effective formative assessment involves all of these:

  • teachers making adjustments to teaching and learning in response to assessment evidence
  • students receiving feedback about their learning with advice on what they can do to improve
  • students participating in the process through self-assessment

In addition, Valerie Shute explains in Focus on Formative Feedback that feedback should be:

  • non-evaluative
  • supportive
  • timely
  • specific
  • valid
  • objective
  • focused
  • clear

Dr. Shute goes on to give an analogy comparing formative feedback to ‘a good murder’ that made me laugh: “Formative feedback might be likened to ‘a good murder’ in that effective and useful feedback depends on three things: (a) motive (the student needs it), (b) opportunity (the student receives it in time to use it), and (c) means (the student is able and willing to use it).”

Effective teachers continually use different strategies for formative assessment. “All too often, the term ‘formative assessment’ conjures images of quizzes and tests, while in reality, formative assessment is a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning.” (Formative Assessment and Next-Generation Assessment Systems: Are We Losing an Opportunity?) Formative assessment could be something as quick and simple as reading body language and then intervening with prompts, or asking good questions and then offering constructive feedback. At times, it might be more sophisticated, like analyzing data from an ungraded quiz that incorporates student-regulated components and then using the results to determine what content will be taught the next day and how it should be approached.

I feel empowered knowing that there are many ways I can impact student learning daily by incorporating formative assessments. This is something within my control that’s been shown to work!

As part of completing lesson plans at my school, teachers are encouraged to plan the types of formative assessment they will use throughout the week. I appreciate the reminder to intentionally incorporate different strategies. These links are a few of the practical resources I like.

In addition, you may find Kip Glazer’s recommendations particularly helpful for virtual learning in her post called Why Assessment?

In the next two posts, we’ll look at how AI can be used for formative assessment and then explore ASSISTments, a successful AI math homework tool.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

The photo in this post is by Tamarcus Brown on Unsplash


My Students are Deep, Complex, and Beautiful. Assessments Should Be, Too.

by Sarah Hampton

I have a love/hate relationship with standardized assessments. I understand they’re necessary so we can get a picture of what our students are taking away from their school experience. BUT, there are so many times when I find myself making decisions about my instruction based on how I think it will affect the summative assessment. Sometimes I am even choosing between holistic practices advocated by the standards themselves for learning and superficial practices I’ve seen work to improve test scores. Teachers are accountable to the test scores, so there is pressure to make decisions that give quick gains but may not result in long-term learning.

This conflict seems to be a common experience. When I talk to fellow educators about new pedagogies, tools, or curricula, one of the very first questions asked is always, “How will this affect my test scores?” Bottom line: high stakes assessments are still driving instructional decisions, for better or worse.

If the summative assessments were perfectly designed to really assess learning, then this would be ideal, right? If tests were truly an accurate reflection of everything we want our students to know for future learning, to be able to do, and help them be, then making instructional decisions that cause those scores to increase would mean our students were getting closer to those goals. However, I have a major bone to pick: Current standardized tests are an oversimplification of the depth, complexity, and beauty of my students.

I get that it’s important to prove that schools are doing what we say they’re doing. It’s important that we’re held accountable for all students learning in ways that benefit them. My problem? I don’t think current standardized tests are actually proving that students are learning. Most tests (even the adaptive ones I’ve seen) are still multiple choice, true/false, drop down, matching, or, occasionally, fill in the blank or short answer. The outcomes I want from my students simply cannot be properly evaluated that way! In addition, I shouldn’t feel pressured to sacrifice meaningful instructional activities that support my students’ complexity, depth, and beauty in order to make a shallow snapshot look better.

State and national assessments aren’t going away. I’m not even suggesting that they should. As a math and science teacher, I’m all about making informed decisions based on the right data. Unfortunately, as Pellegrino and Quellmalz note in Perspectives on the Integration of Technology and Assessment:

“in the realm of high-stakes assessment for No Child Left Behind (NCLB) accountability, a number of regulatory, economic, and logistical issues have constrained the breadth and depth of the content and performance standards assessed in annual on-demand tests. Standard, multiple-choice item formats continue to dominate large-scale, computer-based, high-stakes testing, resulting in an over reliance on simple, highly structured problems that tap fact retrieval and the use of algorithmic solution procedures.”

So, because simple, highly structured problems are the things most current tests assess, teachers are unintentionally incentivized to promote surface level thinking around oversimplified problems.

What if there was a way to design an assessment that wasn’t constrained that way? What if assessments were made of messy problems and could honor multiple student pathways to solutions? What if they could analyze and reward student thinking on a topic? Better standardized assessments could be one possible way to reconcile what and how I want my students to learn with how they can express what they know.

Here comes the good news: the learning sciences community is integrating technology in ways that are rapidly improving assessments so they can give us a more complex and accurate picture of what our students know and do! That means that investing in the complexity of our students would translate to better scores because the assessment could give us a more comprehensive image of their understanding. It’s like the assessment world is moving to 3D printing instead of 2D black-and-white line drawings! No more tug of war between deep, meaningful instruction and what I feel like I have to do for the test because, now, they can be the same thing. Yay!

Consider My Science Tutor (MyST), an NSF-funded project called an intelligent virtual tutor. MyST uses artificial intelligence to interpret and assess how well students understand science concepts by analyzing their dialog. The principal investigators of MyST say that “its goal is to assess student understanding of concepts rather than facts, which is very important to prepare students and the future workforce in STEM.” Student understanding of concepts rather than facts? That’s what I want!

Before advancements like the ones that make MyST a reality, a student’s answer on an assessment had to exactly match the correct answer designated by the programmer. That’s why standardized tests relied so heavily on multiple choice, T/F, etc. Now, machine learning allows computers like MyST to ‘learn’ what understanding a topic sounds like by analyzing speech from multiple dialog sessions–over 13,000 sessions for MyST, in fact. Then, it can analyze how well a new student has learned the concept based on what MyST ‘knows’. This is the essence of artificial intelligence: a machine doing what was previously thought to be unique to humans. In Towards artificial intelligence-based assessment systems, Rose Luckin says, “AI is a powerful tool to open up the ‘black box of learning’, by providing a deep, fine-grained understanding of when and how learning actually happens.”

This level of comprehension analysis was formerly only possible through one-on-one evaluation by teachers or researchers–an unrealistic burden on human time. Now, as Pellegrino and Quellmalz say:

“In addition to assessment of student knowledge and skills in highly structured problems with one right answer, technology can also support the design of complex, interactive tasks that extend the range of knowledge, skills, and cognitive processes that can be assessed.”

The implications of this could transform the assessment landscape by allowing computers to ‘listen’ to every student at once and check for genuine understanding. There’s something beautiful about that. Even more, when assessments can help teachers evaluate complex understanding using open-ended problems, teachers are incentivized to promote deep thinking about deep problems. My students are deep, complex, and beautiful. Assessments should be, too.

How about you? How do you resolve the ever present conflict between good teaching and standardized assessments? Do you think standardized assessments cover important issues? What do you think about creating new assessments using AI? Join the conversation by tweeting @CIRCLEducators.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.


Harnessing Educational Data: Discussing Dr. Safiya Noble’s Keynote from Cyberlearning 2019

By Pati Ruiz, Sarah Hampton, Judi Fusco, Amar Abbott, and Angie Kalthoff

In October 2019, the CIRCL Educators gathered in Alexandria, Virginia for Cyberlearning 2019: Exploring Contradictions in Achieving Equitable Futures (CL19). For many of us on the CIRCL Educators’ team, it was the first opportunity to meet in person after working collaboratively online for years. In addition, CL19 provided us with opportunities to explore learning in the context of working with technology and to meet with researchers with diverse expertise and perspectives. We explored the tensions that arise as research teams expand the boundaries of learning and considered how cyberlearning research might be applied in practice.

One of the topics we thought a lot about at CL19 is algorithms. We had the opportunity to hear from keynote speaker Safiya Noble, an Associate Professor at UCLA and author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). In her keynote, The Problems and Perils of Harnessing Big Data for Equity & Justice, Dr. Noble described the disturbing findings she uncovered when she started investigating search algorithms. She was not satisfied with the answer that the way algorithms categorized people, particularly girls of color, was what “the public” wanted. She dug in deeper, and what she said really made us think.

This keynote is related to some of the conversations we’re having about Artificial Intelligence (AI), so we decided to re-watch the recorded version and discuss the implications of harnessing Big Data for students, teachers, schools, and districts. Big Data is crucial in much of the work related to AI, and so are algorithms. We bring this into our series on AI because even though math and numbers seem culturally neutral, there are ways they are biased and can be used to promote discrimination. In this post, we don’t summarize the keynote, but we tell you what really got us thinking. We encourage you to watch it too.

Besides discussing algorithms for search, Dr. Noble also discussed implications of technology, data, and algorithms in the classroom. For example, Dr. Noble shared how she breaks down for her students how a Learning Management System works, so that they know the technology they are using can inform their professors of how often and how long they log into the system (among other things). She said they were often surprised that their teachers could learn these things. She went on to say:

“These are the kinds of things that are not transparent, even to the students that many of us are working with and care about so deeply.”

Another idea that particularly resonated with us, as teachers, from the talk is the social value of forgetting. Sometimes there is value in digitally preserving data, but sometimes there is more value in NOT documenting it.

“These are the kinds of things when we think about, what does it mean to just collect everything? Jean-François Blanchette writes about the social value of forgetting. There’s a reason why we forget, and it’s why juvenile records, for example, are sealed and don’t follow you into your future so you can have a chance at a future. What happens when we collect, when we use these new models that we’re developing, especially in educational contexts? I shudder to think that my 18-year-old self and the nonsense papers (quite frankly who’s writing a good paper when they’re 18) would follow me into my career? The private relationship of feedback and engagement that I’m trying to have with the faculty that taught me over the course of my career or have taught you over the course of your career, the experimentation with ideas that you can only do in that type of exchange between you and your instructor, the person you’re learning from, that being digitized and put into a system, a system that in turn could be commercialized and sold at some point, and then being data mineable. These are the kinds of real projects that are happening right now.”

We are now thinking a lot about how to help students and teachers better understand how our digital technology tools work, and about how we should balance the benefits of using technology to help learners against the cost of hyper-datafication: saving everything and never letting a learner move past some of their history.

As we think through this tension, and other topics in the keynote, some of the questions that came up for us include:

  • What information is being collected from our students and their families/homes and why? Where does the information go?
  • Who is creating the app that is collecting the data? Are they connected to other programs/companies that can benefit from the data?
  • What guidelines for privacy does the software company follow? FERPA? COPPA? Do there need to be more or updated standards? What policies aren’t yet in place that we need to protect students?
  • What kinds of data are being digitally documented that could still be available years after a student has graduated? How could that impact them in job searches? Or, what happens when our students, who have documented their whole lives digitally, want to run for public office?
  • There are well-documented protocols for destroying students’ physical work, so what documented protocols are in place for their digital work?
  • Are school devices (e.g., Chromebooks or iPads) that contain sensitive student data being shared? Are all devices wiped between school years?
    • Students clean out their desks and lockers at the end of the school year, should we be teaching them to clean out their devices?
    • Do students have an alternative to using software or devices if they or their families have privacy concerns? Should they?
  • Is someone in your district (or school) accountable for privacy evaluation, software selection, and responsible use?
    • How are teachers being taught what to look for and evaluate in software?

In future posts, we’ll cover some more of what Dr. Noble suggested based on her work including the following points she made:

  1. (Re)consider the effect of hyper-datafication
  2. Resist making issues of justice and ethics an afterthought or additive
  3. Protect vulnerable people (students) from surveillance and data profiling
  4. Fund critical digital media research, literacy programs, and education
  5. Curate the indexable web, create multiple paths to knowledge
  6. Reduce technology over-development and its impact on people and the planet
  7. Never give up on the right things for the planet and the people

Figure. Dr. Noble on stage at the Cyberlearning 2019 meeting.

Finally, some of us have already picked up a copy of Algorithms of Oppression: How Search Engines Reinforce Racism and if you read it, we would love to hear your thoughts about it. Tweet @CIRCLEducators. Also, let us know if you have questions or thoughts about the keynote and/or algorithms.


Introduction to Artificial Intelligence in Education

By Sarah Hampton

As an avid fan of CIRCL and the wife of a programmer, it’s safe to say I’m somewhat of a technophile. I’m typically and happily an early adopter of ed tech. Even so, my initial reaction to artificial intelligence (AI) in education was somewhere between skeptical and antagonistic. Like many teachers I’ve talked with, I was concerned that using AI would weaken the human connection that’s so important for a healthy school environment. I was and remain concerned about equity and access issues around technology. I also have serious questions about my students’ privacy. However, as I started digging into what AI actually is (and isn’t), I realized that I should learn more about it so I can offer my voice as a teacher to the communities developing the tools they want us to use. Over the summer, with the CIRCL Educator team, I’ll be digging into AI. In a series of posts, I will share the most important, perspective changing, and exciting things I’ve learned about artificial intelligence and what it might mean for education. I hope you’ll join me and let me know your questions and concerns.

First, let’s clarify artificial intelligence. What is and isn’t AI?

Let’s start by defining AI as a machine doing something we formerly thought only humans could do. More specifically, though, AI is just a specific type of computer software. The difference between AI and the software you’re already familiar with is that AI doesn’t follow a linear set of simple instructions. Instead, it uses algorithms, or rules, set initially by the developer (a human), and then builds a model as it runs through data. The AI continually fine-tunes that model as it encounters more data. That’s why some people say AI “learns” or “teaches itself.” It’s not learning like a human would; it’s building models that optimize for criteria set in the algorithm. (For my math colleagues, think regressions/curve fitting on steroids.) The names AI and machine learning, which is a specific approach used in AI, make it sound like the software takes on a life of its own. That’s not true. As our English Language Arts colleagues could tell us, it’s just an example of anthropomorphism: ascribing human characteristics to a nonhuman object.
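To make the “curve fitting on steroids” idea concrete, here is a minimal sketch of model fitting in Python: the program is never told the answer; it estimates a line from noisy data, and its estimate improves as it sees more data. Real AI systems fit far richer models, but the principle is the same.

```python
# A minimal sketch of "learning" as model fitting: estimate a line from noisy
# samples of y = 2x + 1 and watch the estimate sharpen as the data grows.
import numpy as np

rng = np.random.default_rng(0)
true_slope, true_intercept = 2.0, 1.0

def fit_line(n_points):
    """Fit y = m*x + b to n noisy samples and return (m, b)."""
    x = rng.uniform(0, 10, n_points)
    y = true_slope * x + true_intercept + rng.normal(0, 1.0, n_points)
    m, b = np.polyfit(x, y, deg=1)  # least-squares fit
    return m, b

for n in (10, 100, 10_000):  # more data -> estimates closer to the truth
    m, b = fit_line(n)
    print(f"n={n:>6}: slope={m:.3f}, intercept={b:.3f}")
```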

We’ll consider different types of AI in a future post. For now, let’s look at AI in two ways: on one hand, compared to prior types of software, AI is extremely sophisticated and capable of things we thought were unique to humans twenty years ago; on the other hand, as the example below shows, it still doesn’t understand what it’s doing the way a human does.

Let’s take an example you might be familiar with: Grammarly. (Full disclosure: I don’t use Grammarly routinely, but I decided to investigate after seeing their YouTube ad about 500 times, and I’m guessing you may have seen those ads, too.) AI like the type Grammarly uses can “learn” what good writing looks like. It was trained on the features of good writing by being shown hundreds of thousands of sentence pairs. In each pair, one sentence was written poorly and one was a well-written target sentence. From these pairs, Grammarly “gained insight” into elements of good writing. However, while the AI learns, it doesn’t understand why a sentence is good the way a human can. It can only recognize the many detailed features and patterns present in the examples. Then, when someone uploads a new writing sample, the AI can compare it to the patterns it detected in the training examples to determine how closely the new sample matches the features of the training sentences. The AI provides guidance to the human writer by offering suggestions that would help the writing sample match the exemplary writing from the training.
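As a toy illustration of that train-then-compare pattern, here is a sketch that assumes the scikit-learn library and uses a tiny handful of made-up sentences; it shows the general shape of the approach, at nothing like Grammarly’s scale or sophistication.

```python
# A toy version of learning "what good writing looks like" from labeled
# examples: train a classifier on good/poor sentences, then score new text.
# This is a sketch of the general pattern, not Grammarly's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The experiment clearly supports the hypothesis.",   # good
    "experiment good it was and hypothesis yes",         # poor
    "Each student received specific, timely feedback.",  # good
    "feedback student got some kind of",                 # poor
]
labels = [1, 0, 1, 0]  # 1 = well written, 0 = poorly written

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_sentences, labels)

new_sample = "The data received was analyzed carefully."
prob_good = model.predict_proba([new_sample])[0][1]
print(f"Estimated probability this matches 'good' patterns: {prob_good:.2f}")
```

Notice that the model never “understands” grammar; it only scores how closely new text matches the statistical patterns of its training examples, which is exactly the limitation described above.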

That’s one high-level example for today. I have other projects to go through in later posts, but I want to go back to thinking about how we define artificial intelligence. A recent EdSurge article brought up a great point: “when educators have different concepts of what makes a piece of technology or tool ‘intelligent,’ it means that a variety of tools get lumped into the AI category—even if they aren’t technically ‘artificially intelligent.’” Let’s think about what’s typically considered artificial intelligence to start to define it. I say start because the field of AI is rapidly changing, and folks in the field are still working on a more precise definition. I’m making a checklist to help us differentiate AI from other kinds of technology.

Checklist: Is it AI?

  • Projector or Document Cam: No. These are useful tools, and let us do smart things, but they’re more hardware than software.
  • Smart Board: No. This is a mix of hardware and software, but the software doesn’t improve as you use it.
  • Basic robot like Ozobot or Dash and Dot: No. Cool robots, but the software doesn’t “learn” over time. Other robots may learn, but not these.
  • LMS (e.g., Google Classroom, Schoology, Canvas): No. LMSs could support the use of AI software to present information adaptively or for grading assignments, but these do not currently implement AI.
  • IXL: No. This software does some interesting things that seem like they might be AI, but the software doesn’t improve as it interacts with more users.
  • Siri, Alexa, Ok Google, etc.: Yes. This software has been trained with lots and lots of voices so it can recognize yours. It also learns to recognize yours better over time.
  • Facial recognition: Yes. Face recognition technology is AI, but it is not considered very robust, meaning it can easily misidentify people.
  • Self-driving car: Yes. As the self-driving car takes more and more test drives, it gets better at driving.
  • Carnegie Learning’s MATHia: Yes. MATHia is Carnegie Learning’s online software program that deploys artificial intelligence to actually teach math. By providing targeted coaching and adapting to student thinking, MATHia mirrors a human tutor with more complexity and precision than any other math software.
  • Grammarly: Yes. Grammarly’s products are powered by an advanced system that combines rules, patterns, and artificial intelligence techniques like machine learning, deep learning, and natural language processing to improve your writing.
  • Adaptive Computer-based Testing: Maybe. It might or might not be AI, depending on the software. Stay tuned for more on this in a future post!
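The common thread in this checklist is a single question: does the software build a model from data and keep improving as it encounters more? Purely as a playful illustration (my own encoding, not an official test), that question could be written as:

```python
# My own playful encoding of the checklist's core question; not an official test.

def is_it_ai(builds_model_from_data, keeps_improving_with_data):
    if builds_model_from_data and keeps_improving_with_data:
        return "Yes"
    if builds_model_from_data:
        return "Maybe"  # e.g., some adaptive testing: depends on the software
    return "No"  # useful tech, but fixed hardware/software, not AI

print(is_it_ai(False, False))  # projector, smart board, basic robots -> "No"
print(is_it_ai(True, True))    # Siri, self-driving cars, MATHia -> "Yes"
```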

What’s up next? We have lots more to share, including AI projects from CIRCL during the CIRCL Educators Summer of AI! We’ll also tackle some of the big questions educators have about AI, like:

  • When will AI matter to me? How could AI make teacher learning more relevant, valuable, or effective?
  • Should I be worried that AI will replace me? What is the ideal balance between human and machine?
  • What needs to be considered so AI can help teachers support students of different races, cultures, genders, and abilities in ways that avoid bias (or at least reduce it)?

I want to thank Pati Ruiz, Judi Fusco, and Patti Schank for their thinking and help with this post. An additional thank you goes to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

Tweet @CIRCLEducators and let us know if you have questions or thoughts about AI.

2019 Video Showcase

By Sarah Hampton

You know how excited you get when a movie you can’t wait to see is finally released? That’s my current state of mind because the 2019 STEM for All Video Showcase will be here May 13-20!

The showcase features short videos from over 200 innovative STEM+C education projects. It’s one of my favorite ways to learn about cutting-edge research and interact with the researchers themselves. Even more, it’s a great way to spark ideas for my classroom and bolster my collection of STEM resources and tools. Plus, the showcase is practically designed, with filters that allow me to browse videos by relevant age band and topic.

While many of the tools I’ve learned about are still in beta phases, some of the projects have become go-to resources for me. For example, I first learned about PhET Simulations through this showcase video and can’t imagine teaching math and science without them now. Even when the resources I learn about haven’t become classroom staples, they’ve been perfect for those one-off situations. For example, I was asked to teach a life science unit just a few weeks ago while another teacher was out. I don’t normally teach life science, but I remembered seeing a video about a genetics simulation called Geniverse and decided to try it. I was thrilled with how well it helped the students see how traits are passed down! I’m sure I could have explained it with whiteboard and markers, but I doubt the process would have been as interesting and individualized as the simulation. I can’t tell you how many times I’ve been reminded of a showcase video, which led me to incorporate some element of it in my classroom!

Sometimes the videos are so motivating that they make me want to change how I teach a unit or my pedagogy in general. That was the case after I watched this video about IC4, which inspired me to design two physical science units around United Nations Sustainable Development Goal #6: Ensure Access to Water and Sanitation for All. My students monitored our campus stream and partnered with EarthEcho International to report our findings in their international database. They also designed and built gravity-fed water filters, a water desalination device, and a cross between a fog net and a harp (two tools to harvest fog) to brainstorm how people worldwide might gain better access to clean and safe water. Not only were the students able to think deeply about physical science topics because of the project, they were also exposed to topics more traditionally considered earth and life science concepts and, most importantly, they learned in the context of something that matters locally and globally. (You can read about my early thoughts after watching IC4 in this post to see how I progressed from reflection to action.)

I like to think of the time I invest in the showcase as adding ideas and tools to my teaching toolbox. (Bonus: the ideas and tools are free because the projects are federally funded!) Some tools will be like screwdrivers and hammers because I know I will use them frequently. Other tools will be like a car jack or a paint sprayer–I won’t need them often, but there’s nothing better when those situations come up. I can’t wait to see what this year’s videos add to my toolbox and hope you find the STEM For All Video Showcase as useful as I have. Let me know what you find interesting by tweeting @CIRCLEducators!

When to Collaborate

By Sarah Hampton

With all the benefits of collaboration I’ve shared with you in the past two posts, should we drop direct instruction altogether and completely restructure our classrooms around collaboration? As it turns out, there are better times to collaborate than others, so don’t throw away your podium just yet. Even proponents of collaborative learning have said “We do not see any reason to develop pedagogical methods which exclusively rely on group activities. Individual reflection is required in order to transform experience into learning and class-wide activities are especially valuable when it comes to structuring the informal knowledge that [previously] emerged.” (from The Mechanics of Computer-Supported Collaborative Learning Macroscripts.) So when should we design collaborative activities for our classrooms?

Have you ever been asked to collaborate on a task that was so simple that figuring out how to involve another person took more work than just doing it yourself? Collaboration is a very powerful activity, but it works best when you need the power of another person’s brain to help you. Otherwise, the overhead of working with another person is not worth it. So it only makes sense to make the effort of figuring out how to work with another person when the instructional task is very large or complex. (To learn more about why and how this works, check out the collective working-memory effect. It is so incredibly cool! Basically, you can leverage others’ working memories to extend your capacity as a group in a synergistic way so the total really is more than the sum of its parts!)

Let’s take a look at three sample classroom activities from A framework for analyzing cognitive demand and content-practices integration: Task analysis guide in science and decide whether or not collaboration is beneficial for the task based on its complexity.

Task 1: Low Cognitive Demand Task

Figure: A crossword puzzle. Sample classroom activity from A framework for analyzing cognitive demand and content-practices integration.

This is a classic example of a low cognitive demand activity–memorization. Students only need to acquire and recall factual knowledge to succeed in the task. In an article titled Collaborative Learning Enhances Critical Thinking, Gokhale demonstrated that lecture followed by individual “drill and practice” was as effective as lecture followed by collaborative learning groups in gaining factual knowledge. Because this is a low cognitive demand task, taking the time and effort to collaborate isn’t worthwhile. You could argue that it would be more efficient to team up with other students and “divide and conquer” the clues. While I don’t think that would be as effective if the learning goal is for each student to be accountable for the information, I agree that it would complete the task more quickly. Even so, remember that grouping up only to divide responsibilities is an example of cooperative learning, not collaborative learning.

Task 2: Low Cognitive Demand Task

Figure: Sample classroom activity from A framework for analyzing cognitive demand and content-practices integration.

This is another example of a low cognitive demand activity. The student can follow the steps from the example to compute the speeds without having to make any sense of the underlying math or science ideas. In fact, there might as well be no context about the dog or runner or baseball because it’s all irrelevant to the task. (Side note for math teachers: Robert Kaplinsky has an article about imposter “real-world” problems called Beware of Fake Math Modeling Problems that’s worth a read!) You could still make a case that this kind of task is worthwhile for your classroom if your instructional goal is to develop a sort of automaticity in your students, “when I see a speed problem, I know I need to divide the distance by the time.” If you want students to memorize and apply the formula, then this might be your ticket. (Please note, though, that students might know what to do, but they probably won’t understand why they need to do it unless they’ve also done some higher level thinking about it.) Because Task 2 is another low cognitive demand task, it can be completed more efficiently individually.

Task 3: High Cognitive Demand Task = Good Collaborative Task

Figure: Guided content task. Sample classroom activity from A framework for analyzing cognitive demand and content-practices integration.

This task goes well beyond acquiring or recalling factual knowledge and asks students to think critically about their comprehensive knowledge of a topic while they evaluate a new source of information. This is an example of a high cognitive demand task and therefore is a good candidate for collaborative activity. Students will almost certainly benefit from discussing their thoughts with others as they clarify and strengthen their arguments. According to Less is more: Teachers’ influence during peer collaboration, “When a student models a strategy that makes an argument more convincing or makes the discussion run more smoothly, other group members are stimulated to appropriate the strategy.” In fact, in their study and several others like it, students were more likely to benefit from hearing other students think aloud and seeing other students model strategies than from anything the teacher said or did! That makes tasks like this the sweet spot for collaboration.

In summary, students need to need the power of each other’s mental capacities to collaborate, and this usually happens during high cognitive demand tasks. Collaboration is a powerful activity, so give your students opportunities to work on large and/or complex tasks that will make it worth their while.

When have you found collaborative activities to be effective in your classrooms? Do you agree or disagree with my assessments above? We would love to start a collaborative reasoning discussion so we can all learn better together. Tweet @CIRCLEducators to join the conversation!

Learn more:

Task Complexity as a Driver for Collaborative Learning Efficiency: The Collective Working-Memory Effect

The Mechanics of Computer-Supported Collaborative Learning Macroscripts

A framework for analyzing cognitive demand and content-practices integration: Task analysis guide in science

Collaborative Learning Enhances Critical Thinking

Beware of Fake Math Modeling Problems

Less is more: Teachers’ influence during peer collaboration

Mathematical Tasks Framework – Task Analysis Guide


The International Community for Collaborative Content Creation (IC4)

by Sarah Hampton

In my last post, we talked about how much there is to effective collaboration and discussed some of the things we should promote during collaborative activities. In this post, I will share how one cyberlearning project is capitalizing on all my favorite aspects of collaboration.

A few months ago, I had a chance to facilitate for the 2018 STEM For All Video Showcase, an online collection of very short videos from federally funded projects that aim to improve STEM education. I really enjoyed thinking deeply about my assigned videos and having conversations with the researchers involved, and, since then, I’ve enjoyed watching several more of the videos outside my group. One of these projects has stuck with me. It’s called the International Community for Collaborative Content Creation (IC4). In this project, students from different countries collaborate online to create a media presentation, most often a video, that explains a STEM topic to their peers. The groups work across national, cultural, and ethnic boundaries to create these artifacts, using tools such as Google Hangouts, Skype, Slack, and iMessage to communicate. Several things about this project are appealing to me:

  1. Students are reflecting deeply on STEM topics, deeply enough to be able to explain them to others. The project team calls this “participatory teaching.” As Einstein said, “If you can’t explain it simply, you don’t understand it well enough.” So, the expectation for meaningful content comprehension is embedded in the project. Furthermore, Project Lead Eric Hamilton observed that, “If you combine helping people with learning, then instead of being in class to do well on a test, you instead are learning to help yourself and others succeed. The result can be transformational.”
  2. The researchers place an emphasis on the process of finding and negotiating shared meaning. Coming to a shared understanding through interaction and reciprocal sense-making is called co-construction. Co-construction can result in a visible outcome like a jointly created physical product and/or an invisible outcome like a more sophisticated way of thinking about something. The students involved in IC4 are not passively receiving knowledge. They are actively co-constructing their understanding of STEM topics as they grapple with them in conversations with others and as they co-construct digital media artifacts. I highly recommend Learning by Collaborating: Convergent Conceptual Change, Co-constructivism in Educational Theory and Practice, and From Intersubjectivity to Group Cognition to learn more about how this plays out during collaboration and what it brings to the learning process.
  3. The participants come from distinct countries, cultures, and economic and social backgrounds. People tend to consciously and/or subconsciously choose to socialize with others who are similar to themselves. (That tendency is called homophily, if you’re interested in googling the term to learn more.) At the same time, research tells us that diverse groups routinely outperform their homogeneous counterparts. And it doesn’t only benefit the group; it benefits the individuals, too.
  4. Students are working on STEM problems that matter. One teacher said that initially students selected their own STEM topic, but in a more recent iteration, they were asked to choose topics from the UN Sustainable Development Goals. Too often, we ask our students to complete tasks that carry no real-world meaning, and I know that our world has untapped intellectual capital in our students – I see it every day! Why not allow our students to apply what they’re learning in their subjects to work toward solutions for ambitious real world challenges? After all, they are the ones who will inherit them.

Because I’ve been so encouraged by the nature of this project, I keep thinking about how it could be implemented on a larger scale, and, as with many projects that have piqued my interest, I am frustrated by the very real obstacles that would make that challenging. Right now students are participating in this project in club settings. I’m sure part of that is because of the difficulty of coordinating online meetings across time zones; parts of the project occur synchronously and other parts asynchronously. But I would love to see this become accessible for all students as part of their everyday classroom experiences. However, teachers are so constrained by their national and state mandated learning objectives that there isn’t much time for long-term projects like these. In my opinion, this is an absolute shame! I feel like we are sacrificing more important goals (international cooperation, shared meaning making and problem solving with diverse peers, and the UN Sustainable Development Goals) for more immediate and measurable ones (subject/verb agreement and fraction operations). I’m not saying the latter goals are unimportant, but rather that there has to be a way to teach and assess them in the context of the former. We as teachers need to feel like we have the time, permission, funding, and support to pursue both kinds of goals during the school day. Otherwise, meaningful and ambitious projects like this will not be able to transform education at the scale I believe they could.

Stepping beyond my teacher role for a moment, as a parent, I want this kind of learning experience for my sons. I want them to engage in real and significant problems with people they otherwise wouldn’t have access to without social media and a digital makerspace. As a parent, I would be willing to sacrifice three to six weeks of standard educational fare for that kind of experience. I remain encouraged by the fact that this project is active and federally funded. Despite the lag between current educational research and widespread classroom practice, I hope this suggests we’re headed in the right direction. I tend to be a wee bit impatient sometimes, so my husband has to frequently remind me that you only make slight adjustments to the course when you’re steering a big ship. I just hope that by the time my boys are in middle school this is the course we’ll be on.

Since some of my favorite aspects of this project are co-construction, diverse participants, and working on challenges that matter, I would love to hear your take on the project and your reactions to my concerns about the obstacles to running it in the classroom. Let’s see if we can negotiate some shared meaning online just like these students are doing. What aspects of the project appeal to you? What obstacles would prevent you from doing something similar during your school day?

Learn more:

2018 STEM For All Video Showcase

IC4 2018 Showcase Video

IC4 Website

CIRCL Perspective on Project Lead, Eric Hamilton

Co-constructing Shared Meaning

Learning by Collaborating: Convergent Conceptual Change

Co-constructivism in Educational Theory and Practice

From Intersubjectivity to Group Cognition

Homophily

Homophily: Measures and Meaning

Homophily and Ethnic Background in the Classroom

Benefits of Diversity

The Benefits of Socioeconomically and Racially Integrated Schools and Classrooms

The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies

The Truth about Diverse Teams

Groups of Diverse Problem Solvers Can Outperform Groups of High-ability Problem Solvers

Why Diversity Matters

Why Diverse Teams are Smarter

How Diversity Makes Us Smarter

Broadening Youth Participation in Computer Science & Engineering

UN Goals

UN Sustainable Development Goals

Unpacking Collaboration

by Sarah Hampton

Collaboration. We all know that means working together, and we all know it’s an educational buzzword with a positive connotation. It’s one of those words that I kind of gloss over when I see it in a paper or blog. My brain does this disengage thing like, “I get that concept. It’s old news. Moving on.” Well, you know how you ask your students the infamous question, “Does that make sense?” and they answer “yes,” and you know they don’t? I didn’t fully get the concept of collaboration; I was doing exactly what my students do, saying “yes” and moving on. As I read this summer, I learned that collaboration may be something we talk about often, but there’s a lot more to it than I thought. I know I’m definitely not ready to move on!

What is collaboration, exactly?

In his chapter What do you mean by collaborative learning?, Pierre Dillenbourg humorously points out that, “When a word becomes fashionable – as it is the case with ‘collaboration’ – it is often used abusively for more or less anything.” So what is it, exactly? At its core, collaboration is two or more people working together, but that definition is deceptively simple. For example, collaborative learning shouldn’t be confused with cooperative learning, in which students divide a task among team members and then work independently. In collaborative learning, students must be mutually engaged in a “coordinated effort to solve the problem together.” Furthermore, merely asking students to “work together” is not enough to lead to positive learning outcomes, so teachers must be intentional about identifying and facilitating effective collaboration. (Tips for that in a minute!) On the other hand, thanks to modern technology, students who are effectively collaborating may not even be in the same room. I like how Mary Burns says it in Edutopia’s blog post 5 Strategies to Deepen Student Collaboration:

“In collaborative activities, we want to ensure that students don’t just occupy the same physical space but that they share an intellectual space—that they learn more, do more, and experience more together than they would alone.”

Why collaborate?

I knew collaboration was supposed to be good for learning, but I was surprised to see the number of documented benefits. In Benefits of Collaborative Learning, Laal and Ghodsi (2012) discuss collaborative learning (CL) and organize results from multiple studies into social, psychological, and academic categories:

Quoted from pages 487-488 of Laal and Ghodsi (2012):

Social benefits
- CL helps to develop a social support system for learners
- CL leads to building diversity understanding among students and staff
- CL establishes a positive atmosphere for modelling and practicing cooperation
- CL develops learning communities

Psychological benefits
- Student-centered instruction can increase students’ self-esteem
- Cooperation often reduces anxiety (everything is easier with a friend!)
- CL can develop positive attitudes towards teachers

Academic benefits
- CL promotes critical thinking skills
- CL involves students actively in the learning process
- CL improves classroom results
- CL models appropriate student problem-solving techniques
- Large lectures can be personalized

How can cyberlearning help?

I suspect what teachers (including me) have often called collaboration didn’t really hit the mark, and maybe we haven’t recognized collaboration when it was happening in other situations. Let’s take another look at a cyberlearning project we’ve talked about before to learn what’s going on during effective collaboration. Check out this post on Speech-Based Learning Analytics for Collaboration (SBLAC) to learn more about the project. In this video, the leader of SBLAC, Cynthia D’Angelo, talks about things teachers can look for during collaboration.

In your own classroom, you can look to see if everyone in a group is contributing to a new understanding or if one person (or a small number of the group) is doing the work. Good indicators include group members verbalizing what is confusing or talking through what makes sense. Making thoughts visible to others (e.g., saying what you are thinking or sharing in writing, a sketch, or a model) is a very important indicator that collaboration is occurring. You could even make your own rubric or checklist for what to look for as you walk around while groups are working together. Sharing this rubric or checklist with your students might help them collaborate better.

In the near future, I hope to see more projects like SBLAC that automatically code these indicators. It would be much more efficient to allow technology to streamline that process so we could focus on giving our students targeted interventions at optimal times.
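To make the idea of automatically coding an indicator more concrete, here is a minimal sketch in Python. To be clear, this is not how SBLAC actually works (SBLAC analyzes features of recorded speech, not typed text); the transcript format, the participation_report function, and the 50% dominance threshold below are all invented for illustration. The toy program simply measures whether talk is spread across a group or dominated by one student:

from collections import Counter

def participation_report(transcript, dominance_threshold=0.5):
    """transcript: a time-ordered list of (speaker, utterance) pairs.
    Returns each speaker's share of the words spoken and flags any
    speaker whose share meets the (hypothetical) dominance threshold."""
    words_per_speaker = Counter()
    for speaker, utterance in transcript:
        words_per_speaker[speaker] += len(utterance.split())

    total_words = sum(words_per_speaker.values())  # assumes a non-empty transcript
    shares = {s: n / total_words for s, n in words_per_speaker.items()}
    flagged = [s for s, share in shares.items() if share >= dominance_threshold]
    return shares, flagged

# Example: a short group exchange about interpreting slope
transcript = [
    ("Ana", "I think the slope tells us how fast the plant grew each day."),
    ("Ben", "Wait, I'm confused about why we divide by the number of days."),
    ("Ana", "Because slope is change in height over change in time, so days go on the bottom."),
    ("Cam", "Oh, so a steeper line just means faster growth?"),
    ("Ana", "Exactly, and a flat line would mean the plant stopped growing."),
]

shares, flagged = participation_report(transcript)
for speaker, share in sorted(shares.items()):
    print(f"{speaker}: {share:.0%} of the words spoken")
if flagged:
    print("Worth a check-in; conversation dominated by:", ", ".join(flagged))

Even a crude word-share measure like this could help a teacher decide which group to visit first. A real system goes much further, looking at the content and timing of students’ speech, which is exactly why I’m excited to see where projects like SBLAC lead.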

What do you think? Did anything about the specifics or benefits of collaboration surprise you? Would you say you frequently use true CL activities or are you hoping to facilitate more for your students? Would you be excited or intimidated to use a tool like SBLAC in your classroom? How do you know if students are working well together? Leave us a comment–we would love to collaborate with you as we come to a better understanding of CL together!

I would like to give a special thank you to Judi Fusco for her time and endless patience as she recommended readings and discussed collaboration with me. Because of her, I have experienced the value of collaboration firsthand.