Category Archives: EducatorCIRCLS

Educator CIRCLS 2023 Summer Session Series

During the summer, Educator CIRCLS held a series of informational and discussion sessions in preparation for the CIRCLS’23 Convening. These sessions provided practitioners with the opportunity to learn about emerging technology for learning and think about what that might look like in their classroom or school context!

Information about each session is below; each entry includes the presentation slides and recordings.

July 12th, 3-4PM ET OR July 13th, 12-1PM ET: Informational Session 1 – What is CIRCLS and the CIRCLS Convening?
Learn about CIRCLS and Educator CIRCLS! This first session introduces the center, the summer series, and the in-practice educators in the CIRCLS community.

Presentation Slides | Downloadable Slides | Webinar Recording

July 18th, 2-3PM ET: Overview of Emerging Technologies and AI in Education
Get an overview of issues and research in AI in Education. The session was based on the recent report Artificial Intelligence and the Future of Teaching and Learning from the Office of Educational Technology.

Presentation slides | Downloadable Slides | Webinar Recording

July 20th, 1-2PM ET: Assessment, Identity, and Agency
What is the role of AI when grading, and how does it affect both student and teacher identity and agency? Attendees who have used AI grading systems shared how those systems have affected the way they and their students feel.

Presentation Slides | Downloadable Slides | Webinar Recording

July 25th, 2-3PM ET: Collaborative Learning and Community Building
What does it look like to leverage AI for collaborative learning? Attendees learned about and shared their experiences with building community with their colleagues and in small groups in the classroom.

Presentation Slides | Downloadable Slides | Webinar Recording

July 27th, 1-2PM ET: Equitable and Ethical Practices and Interactions
What does AI neglect when it comes to the needs of historically excluded populations, both among students and teachers? Attendees learned about the AI Bill of Rights for Teachers and shared how they use or can use it in their classrooms.

Presentation Slides | Downloadable Slides | Webinar Recording

August 1st, 2-3PM ET OR August 3rd, 1-2PM ET: Informational Session 2 – What to expect at the CIRCLS ‘23 Convening
In this second informational session, attendees were invited to drop in to ask about the CIRCLS ‘23 Convening and learn how to apply. Attendees shared their thoughts about what they would like to see at the convening.

August 2nd, 1-2PM ET: Social Robotics
Learn about the design of intelligent robots with social behaviors and their potential roles in learning settings. Attendees discussed how they see robotics technologies fitting into their classroom.

Presentation slides | Downloadable Slides | Webinar Recording

August 15th, 2-3PM: Learning and Productivity
Teacher and student insight and feedback in the development of AI-backed educational technologies is crucial for building effective, trustworthy implementations. Hear about student and teacher recommendations, share yours, and think about what we as a community can do next.

Presentation slides | Downloadable Slides | Webinar Recording

Artificial Intelligence and Education: What We’re Up To


by Pati Ruiz

I was recently asked for an overview of the AI and Education landscape and how we are participating in it. In addition to promoting equity and accountability in AI, we have been writing and conducting research in this space; here is a summary of that recent work, including key ideas. We believe that AI systems should support and augment, but never replace, a human. To ensure this, emerging technology systems and tools should be developed with the input of educators, learners, and families. As always, please share your thoughts with us @EducatorCIRCLS.

Writing and Presentations

AI and the Future of Teaching and Learning | A blog series we partnered on with the U.S. Department of Education’s Office of Educational Technology

Key Ideas:

  • Educational technology is evolving to include artificial intelligence.
  • Artificial intelligence will bring “human-like” features and agency into future technologies.
  • Policy will have an important role in guiding the uses of artificial intelligence in education to realize benefits while limiting risks.
  • Artificial intelligence will enable students and teachers to interact with technology in human-like ways.
  • Individuals will find it difficult to make choices that balance benefits and risks.
  • Creating policies can strengthen how people make decisions about artificial intelligence in education.
  • Educational applications of many types will be artificial intelligence-enabled, including teaching and learning, guiding and advising, and administration and resource planning applications.
  • Use of artificial intelligence systems in school technology is presently light, allowing time for policy to have an impact on safety, equity, and effectiveness.
  • Policies should encourage teacher engagement, including the development of teachers’ trust, and their confidence to recommend not using untrustworthy artificial intelligence systems.
  • Policies should incorporate experiences for educators to shape and support their own professional learning about how to utilize artificial intelligence systems in teaching and learning.
  • Including and informing educators in design and development decisions will result in more useful and usable teacher supports.

AI or Intelligence Augmentation for Education? | Communications of the ACM 

Key Ideas:

  • We recommend a focus on intelligence augmentation (IA) in education that would put educators’ professional judgment and learners’ voice at the center of innovative designs and features.
  • An IA system might save an educator administrative time (for example, in grading papers) and support their attention to their students’ struggles and needs.
  • An IA system might help educators notice when a student is participating less and suggest strategies for engagement, perhaps even based on what worked to engage the student in a related classroom situation.
  • We hope that IA for education will focus attention on how human and computational intelligence could come together for the benefit of learners.

Artificial Intelligence 101: Covering the Basics for Educators | Digital Promise Blog

Key Ideas:

  • AI lets machines make decisions and predictions.
  • Teachers are essential to education, and AI should be used to better support them.
  • Technology often comes with ethical implications, and AI is no different; educators should ask questions and investigate AI tools and systems before adopting them into a classroom.

Teachers Partnering with Artificial Intelligence: Augmentation and Automation | Educator CIRCLS Blog 

Key Ideas:

  • Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue.
  • AI systems should support or augment, but never replace, a teacher.
  • These systems should be developed with the input of teachers, students, and families.

Artificial Intelligence and Adaptivity to Strengthen Equity in Student Learning | Getting Smart

Key Ideas:

  • Educators, researchers, and developers prioritize adaptivity when it comes to emerging learning technologies.
  • Incorporating AI tools requires specific and precise inputs to generate useful outputs.
  • When practitioners, learners, researchers, and developers work together with shared values, more equitable learning is possible.

Ethical AI | EngageAI Nexus Blog 

Key Ideas:

  • Ethical considerations should be front and center throughout the development of any new AI innovation, and ethics should be central to our definition of success for AI.
  • Policies and guidelines from the government, accreditation requirements in education, and standards of professional ethics are all needed to reinforce ethics in AI.
  • Public education is also important so that end-users can make informed decisions based on a full understanding of key issues such as transparency and privacy.

Definitions | Glossary of Artificial Intelligence Terms for Educators: A glossary written for educators to reference when learning about and using artificial intelligence (AI).

Presentation | Insights on Artificial Intelligence and the Future of Teaching and Learning at the 2023 Consortium for School Network (CoSN) Conference.

Listening Sessions | AI and the Future of Learning: Listening Sessions | We supported the U.S. Department of Education’s Office of Educational Technology listening sessions about Artificial Intelligence (AI). We connected with teachers, educational leaders, students, parents, technologists, researchers, and policymakers to gather input and ideas and to engage in conversations that will help the Department shape a vision for AI policy that is inclusive of emerging research and practices while also being informed by the opportunities and risks.

Ongoing Research

Emerging technology adoption framework: For PK-12 education | Educator CIRCLS Emerging Technology Advisory Board

Key Ideas:

  • A framework we co-developed with education community members to help ensure that educational leaders, technology specialists, teachers, students, and families are all part of the evaluation and adoption process for placing emerging technologies (including artificial intelligence and machine learning) in PK-12 classrooms.
  • We are currently working with League member Willy Haug, Director of Technology and Innovation, to modify this framework for adoption at Menlo Park City School District.

Study | ChatGPT/GPT-4 for Developing Sample Computational Thinking Lesson Plans at North Salem School District

  • I am working with Dr. Julio Vazquez, Director of Instruction and Human Resources at North Salem School District, who is working with his team to develop sample computational thinking lessons across all subject areas K-12 using ChatGPT. These lessons are not meant to be implemented in the classroom “as is,” but rather, these sample lessons are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process.

Artificial Intelligence Practitioner Advisory Board | A group that will explore the use of emerging technologies in classrooms, and how we might leverage technologies to better support educators and their students. We hope to foster a sense of community within the group where researchers and developers can learn along with you as we all go through the process of reviewing technologies and making recommendations on their use. This Practitioner Advisory Board is supported by two NSF projects:

Developing Sample Computational Thinking Lessons with ChatGPT

by Pati Ruiz, Merijke Coenraad, and Judi Fusco with contributions from Julio Vazquez

What is ChatGPT?

Let’s start with some definitions. ChatGPT is commonly classified as a natural language processing model, meaning it deals with language and human speech patterns, and as “generative artificial intelligence,” meaning that it is AI that creates new content — in this case, new text.

More specifically, ChatGPT is a chat-based generative pre-trained transformer. This means that the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); and (3) can process sentences differently than other types of models (Transformer). Basically, it’s a chatbot that allows a user to ask a question in plain language and get a response in a way similar to how a human would reply.

What does this mean for education?

“ChatGPT is a good prompt for conversation.
I see this tool as a starting point for teachers and students.”
-Julio Vazquez, North Salem Central School District

Despite the precedent of banning access to ChatGPT set by New York City Public Schools in January 2023, not all school districts are following suit. Some educators believe that these AI systems and tools are out in the world, and the best thing educators can do is teach students to partner with AI tools so they can be better prepared for a technological world. For example, English teacher Cherie Shields was recently interviewed by the New York Times, where she shared that she assigned students in one of her classes to use ChatGPT to create outlines for a recent essay assignment. She shared that the process helped deepen students’ understanding of the stories while also teaching them to interact with an AI system by manipulating their inputs to get the responses they were looking for. In this case, ChatGPT became a tool that can support learning when we thoughtfully include it in our lessons and also guide students in using it well.

Dr. Julio Vazquez, Director of Instruction and Human Resources, and his team are encouraging experimentation and access to ChatGPT for all faculty and staff and are thinking about how to provide students with access in a manner that will not conflict with student privacy laws. Staff members are rolling their sleeves up and starting to explore and learn about how they can use it with their students productively. In fact, they are exploring the use of ChatGPT to develop sample Computational Thinking (CT) lesson plans that the team uses as a jumping off point in their CT Pathways development process.

ChatGPT for Developing Sample Computational Thinking Lesson Plans

North Salem Central School District

In a recent conversation with Dr. Vazquez, we asked him more about how he and his teachers are incorporating ChatGPT in their computational thinking lesson planning process.

Dr. Vazquez and his colleague Cynthia Sandler, Library Media Specialist, started by entering prompts into ChatGPT and seeing what came out. Their first prompts went something like “generate a 5th grade lesson for computational thinking focusing on science.”

As the team began to analyze the lesson plans that came out, they realized they needed to make adjustments. Julio shared that he and his team have become better at giving ChatGPT enough context so that the generated lessons are closer to what the team expects of a lesson plan and the content better aligns to both CT and content area standards. For example, a more recent prompt included:

“write a science lesson that integrates 9-12.CT.1: Create a simple digital model that makes predictions of outcomes. and HS-PS1-5: Apply scientific principles and evidence to explain how the rate of a physical or chemical change is affected when conditions are varied.”

The prompts and outputs were documented and provided a good starting point for sparking conversation. On first pass, the team collectively agreed that they liked the structure of the generated lesson plans. Beyond format, computational thinking, and subject area standards, the prompts entered into ChatGPT also incorporated Habits of Mind, the thinking dispositions implemented in North Salem, as well as Visible Thinking Routines.
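For teams that want to assemble this kind of prompt programmatically rather than typing it into the chat interface each time, the pieces (subject, CT standard, content standard, extra framing) can be combined with a small helper. The sketch below is purely illustrative: the `build_lesson_prompt` function and its data structures are hypothetical, not part of North Salem's actual workflow; only the standards text is taken from the example prompt in this post.

```python
# Hypothetical sketch: assemble a lesson-plan prompt from a CT standard
# and a content-area standard, plus any extra framing (e.g., Habits of Mind).

def build_lesson_prompt(subject, ct_standard, content_standard, extras=()):
    """Combine a subject area, a (code, text) CT standard, a (code, text)
    content standard, and optional extra instructions into one prompt string."""
    parts = [
        f"write a {subject} lesson that integrates",
        f"{ct_standard[0]}: {ct_standard[1]}",
        f"and {content_standard[0]}: {content_standard[1]}",
    ]
    parts.extend(extras)
    return " ".join(parts)

prompt = build_lesson_prompt(
    "science",
    ("9-12.CT.1", "Create a simple digital model that makes predictions of outcomes."),
    ("HS-PS1-5", "Apply scientific principles and evidence to explain how the rate "
                 "of a physical or chemical change is affected when conditions are varied."),
    extras=["Incorporate Habits of Mind and a Visible Thinking Routine."],
)
print(prompt)
```

The resulting string would then be pasted into ChatGPT (or sent through an API); either way, the generated lessons still need the human vetting the North Salem team describes.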

Dr. Vazquez and his team have worked with ChatGPT to develop sample computational thinking lessons across all subject areas K-12. These lessons are not meant to be implemented in the classroom “as is,” but rather, these sample lessons are to be used as a first draft, a starting point for consideration and conversation in North Salem. Teachers will vet the lessons for accuracy and then iterate and improve them in order to meet the learning needs of their students. Given the need for high-quality, integrated computational thinking lessons, we will continue to work with Dr. Vazquez and his team at North Salem to learn more about how they are integrating ChatGPT in their work and their vetting process. We look forward to sharing more! Until then, do you have questions for us? Are you integrating ChatGPT in your classroom, school, or district? Let us know @EducatorCIRCLS.

Teachers Partnering with Artificial Intelligence: Augmentation and Automation


By Pati Ruiz and Judi Fusco

Artificial intelligence systems are increasingly being deployed in K-12 educational settings and we expect this trend to continue. Our starting point is that AI systems should support or augment, but never replace, a teacher. In order to ensure this, these systems should be developed with the input of teachers, students, and families.

So, what types of AI systems do teachers want to see developed? A group of teachers from the Merlyn Mind Practitioner Advisory Board shared ideas for how AI might help teachers better support their students. One scenario emerged around students who have Individualized Education Programs or Plans (IEPs)1. In this post we will describe how an AI system might support teachers and students by automating:

  1. Planning and Supporting Preferences
  2. Monitoring
  3. Documentation

Planning and Supporting Preferences

First, a teacher could input student instructional plans into the system. Then, the system can review the plans, make recommendations, and send alerts to the teacher when something may not work for a student. In the alert, the system could provide suggestions of adaptations on lessons or assignments based on the needs of each student. For example, an AI system can scan what’s coming up in an instructional unit and alert the teacher that the website they selected does not meet the accessibility standards required by the students in the classroom. A more advanced system could also suggest an alternative option, or even better, search for multiple resources that are appropriate and let the teacher decide what resources are best suited for their students’ instructional needs. In all cases, the AI system is only helping and making suggestions that the teacher may act on.

An AI system can also allow for a broader range of inputs from students for assignments based on their needs. For example, if a student accommodation includes submitting assignments as recorded audio, but the teacher prefers written assignments, an AI system can convert the student’s audio to text so the teacher can review or grade the text. The speech-to-text tool should also allow the teacher to hear the student’s voice for a particular sentence or phrase, for example, if the transcription was not successful. Alternatively, if a student needs to hear the teacher’s comments on their assignments instead of reading them, the AI system can convert the written comments into audio for the student to hear. To further help the teacher, the system might suggest comments that they had written for another student so the teacher can reuse or repurpose them. The system might also remind the teacher of a student’s preference for feedback; if the student prefers verbal feedback, the teacher could read and record the comments for that more personal touch.

Monitoring

To support teachers in providing adequate accommodations for their students, an AI system can monitor student IEP information and make automated recommendations for needed support. For example, the system could identify students who require extended time and either share a list with the teacher or make appropriate adjustments to due dates for individual students in a learning management system. Here, we point out the need for AI systems to be able to interact with other systems or be embedded within them. Additionally, the system must do this in a way that does not expose sensitive information about students to the whole class.
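The due-date adjustment idea above can be made concrete with a small sketch. Everything here is hypothetical (the accommodation records, field names, and the `adjusted_due_date` helper are invented for illustration); a real system would read accommodations from the district's IEP management system, push changes to the LMS, and, as the post stresses, leave the final decision with the teacher.

```python
from datetime import date, timedelta

# Hypothetical accommodation records; a real system would pull these
# from the district's IEP management system, not a hard-coded dict.
accommodations = {
    "student_a": {"extended_time_days": 2},
    "student_b": {},  # no time accommodation on file
}

def adjusted_due_date(student_id, base_due):
    """Propose a per-student due date, extended by any documented
    extended-time accommodation. The system only suggests; the teacher
    reviews and can override every adjustment."""
    extra = accommodations.get(student_id, {}).get("extended_time_days", 0)
    return base_due + timedelta(days=extra)

base = date(2024, 5, 10)
print(adjusted_due_date("student_a", base))  # proposed: two extra days
print(adjusted_due_date("student_b", base))  # unchanged
```

Note that the per-student dates would be applied individually in the LMS, which keeps the sensitive accommodation information from being exposed to the whole class.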

Related to the text-to-speech and speech-to-text ideas discussed above, an AI system can also provide individualized read-aloud capabilities for students who need that support. The system could also remind the teacher to provide tools, like headphones or closed captioning for students who need to listen to content. We firmly believe that AI systems can help by doing things that machines are good at, while continuing to enable teachers to focus on what humans do best—like developing interpersonal relationships and identifying nuanced needs. With these types of automated supports, it is important to ensure that teachers have the ability to make the final decisions about students’ needs and that students have the agency to accept and decline supports as they go.

Documentation

Supporting a classroom with students who have varying needs—whether they are documented in an IEP or not—requires a significant amount of monitoring and reporting on the part of educators. An AI system could support teachers by not only monitoring the individual requirements of students, but also documenting the adjustments and accommodations that were made for each student. This documentation could then be shared with the students’ families to provide a summary of the work that students have accomplished and how they have been supported in completing that work. Of course, a teacher would review and verify that the summary produced by the AI system is accurate and flag any issues with the write-ups that would need to be addressed by the AI design team.

By the end of the instructional unit, teachers would be able to review reports of student progress, identify what worked and what didn’t, and ensure that all students are making meaningful progress. Automating planning, tracking, and documentation can give a teacher more time to care for students; however, given the various risks AI systems bring, it is crucial that teachers also have the capability to override an AI system when needed.

Risks

The imagined AI system described helps teachers do what they do best by supporting them to ensure their students receive the accommodations they require and then documents those accommodations. Using such systems will come with risks, and AI systems that engage with student IEP data need to have the highest level of data privacy and oversight. As we discussed earlier, educators must be involved—for example, the teacher is in charge of giving feedback, but the system may make suggestions that help the teacher give better feedback. If educator experts are not in the loop, there could be harmful consequences for students. Educators must be diligent and not assume that every accommodation determined by an AI system is correct or the best decision. AI systems lack full context and the ability to make human decisions. Educators must have oversight and be able to verify and approve every decision made by the system.

Educator Voices

This blog post presents an imagined AI system based on conversations with a group of practitioners from the Merlyn Mind Practitioner Advisory Board. We need more teachers and educators involved in these conversations, so please consider this blog post as an invitation to you to connect with us and join the conversation on the future of AI in Education. In addition to Merlyn Mind, if you are interested in getting involved, please visit the links below.

1 An IEP is a legal document in the United States that is developed for all public school children who need special education. It is created by district personnel with input from the child’s guardians and is reviewed every year. For more information see https://www2.ed.gov/about/offices/list/ocr/docs/edlite-FAPE504.html

Enhancing Learning Performance With Microlearning


image by Arthur Lambillotte via Unsplash

by Courtney Teague, Rita Fennelly-Atkinson, and Jillian Doggett

Courtney Teague, EdD, Deputy Director of Internal Professional Learning and Coaching with the Verizon Innovative Learning Schools program, based in Atlanta, GA.
Rita Fennelly-Atkinson, EdD, Director of Micro-credentials with the Pathways and Credentials team, based in Austin, TX.
Jillian Doggett, MEd, Project Director of Community Networks with the Verizon Innovative Learning Schools program, based in Columbus, OH.

What is microlearning?

Microlearning is a teaching and learning approach that delivers educational content in short, focused bursts of information. Microlearning tends to focus on one objective, and the learning requires no more than 1-20 minutes of the learner’s time. Schools and teachers can use microlearning to supplement traditional instruction or as a standalone learning tool. Microlearning has been around for a long time; remember those flashcards in kindergarten that helped us learn numbers, the alphabet, and colors? However, schools have been largely unaware of how powerful this learning strategy can be for teachers.

Microlearning has several potential benefits for both learners and teachers. For learners, microlearning can provide a more engaging and interactive learning experience. This type of instruction can also help to reduce distractions for students who become disengaged with unnecessary learning information. For teachers, microlearning can be used to differentiate instruction and address the needs of all learners. Additionally, microlearning can save instructional time by allowing teachers to deliver targeted information in a concise format. Teachers can tailor microlearning content to focus on specific skills or knowledge gaps (Teague, 2021).

Microlearning is flexible and can be accessed anytime, anywhere. Learners can complete microlearning activities on their own time, at their own pace. Microlearning is like a seasoning for learning; it seasons and heats up information to make the process of comprehending new knowledge easier. It has been part and parcel of many schools’ instructional strategies since time immemorial, but only recently have we begun paying attention to how powerful this strategy really can be when used correctly.

What does microlearning look like?


image by Brian Kostiuk via Unsplash

Microlearning can come in many forms. Below is a list of 10 microlearning examples:

  1. Short, Focused Videos
  2. Infographics
  3. Podcasts or Audio Recordings
  4. Social Media Posts and Feeds
  5. Interactive Multimedia
  6. Animations
  7. Flashcards
  8. Virtual Simulations
  9. Assessment Activities: Polls, Multiple-Choice Questions, Open Response Questions
  10. Games

How can teachers use microlearning effectively to maximize content retention, personalize learning experiences, and bolster student engagement?

Use microlearning to Activate Student Prior Knowledge and Generate Excitement for New Learning

Assign microlearning, such as a self-paced learning game, to assess and activate prior knowledge around a topic. Or place a few bite-sized learning opportunities about an upcoming lesson in your Learning Management System (LMS) for learners to preview beforehand to generate interest and excitement for new learning.

Use microlearning to Personalize Learning Experiences

Creating microlearning in various formats covering multiple topics gives learners the agency to make meaningful choices about their learning paths. For example, to learn a new concept or build new skills, learners can choose to engage with an interactive image, listen to a short audio guide, participate in a learning game, or watch an explainer video or animation. Additionally, learners who need remediation or want to extend their learning can quickly access content to review a topic again or complete additional microlearning lessons.

Use microlearning to Encourage Communication and Collaboration

Create different microlearning bites, each covering a specific objective or portion of a learning goal. Assign each student to engage with one microlearning bite and then use the Jigsaw method to have learners learn about a new topic in a cooperative style. Similarly, you can assign microlearning that includes thought-provoking, probing questions and have learners discuss on a discussion forum or by recording and responding to each other’s short video or audio responses.

Use microlearning to Engage Families and Caretakers

Distribute microlearning to learners’ families and caretakers so they can quickly learn the content being taught in class and take an active role in their child’s learning at home.

Use microlearning to Reduce Time Spent Grading

Create microgames and assessments using tools that automatically grade and provide learner analytics to reduce the time spent grading. For example, create an interactive video with embedded questions, a short quiz on your LMS, or a learning game that automatically grades learners’ responses and provides you with learner analytics you can use right away to inform just-in-time teaching.
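The auto-grading idea above can be sketched in a few lines. The quiz content, the `grade` helper, and the analytics format below are all illustrative assumptions, not tied to any particular LMS or tool; the point is that grading and per-question analytics can be generated automatically, freeing the teacher for just-in-time instruction.

```python
# Illustrative sketch: an auto-graded micro-quiz with per-question analytics.

quiz = {
    "q1": {"prompt": "2 + 3 = ?", "answer": "5"},
    "q2": {"prompt": "Capital of France?", "answer": "paris"},
}

def grade(responses):
    """Grade one learner's responses against the answer key;
    returns a dict of question id -> correct (True/False)."""
    return {q: responses.get(q, "").strip().lower() == item["answer"]
            for q, item in quiz.items()}

def analytics(all_results):
    """Fraction of learners who answered each question correctly, so the
    teacher can see at a glance which concept needs reteaching."""
    n = len(all_results)
    return {q: sum(r[q] for r in all_results) / n for q in quiz}

results = [grade({"q1": "5", "q2": "Paris"}),
           grade({"q1": "4", "q2": "paris"})]
print(analytics(results))
```

Here the analytics would show that every learner got q2 right but only half got q1, flagging that concept for a quick reteach.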

Use microlearning to Build Classroom Community

Use microlearning to Promote Learning Outside of School

Over time, create and curate a repository of microlearning assets, such as explainer videos, audio recordings, infographics, learning games, trivia quizzes, flashcards, etc., on your Learning Management System (LMS). Then, learners can easily access and continue their learning outside of school, cultivating a life-long learning mindset.

How to assess microlearning?

The flexibility of microlearning allows for an abundance of possibilities in how it is assessed. For example, if your goal is simply to educate people about a new process using a video, then you don’t have to assess at all; you can measure reach by the number of views and effectiveness by the level of adherence to the new process by a specific date. If your goal is to educate people about available services, then your performance indicator might be the use of those services. In other words, you have license to be creative and to assess learning effectiveness in many different ways.

More formally, the evaluation of learning can be categorized into two types: assessments and indicators (Fennelly-Atkinson & Dyer, 2021). Assessments include most formal and informal methods of evaluating learning, such as surveys, check-ins (e.g., verbal, data, or progress), completion rates, knowledge checks, skill demonstration observations, self-evaluations, and performance evaluations. Meanwhile, indicators include indirect measures such as performance, productivity, and success benchmarks. Which type you use is largely dependent on the learning context and need. The key questions to consider are the following:

  • What measurable change is the microlearning impacting?
  • Do you need individual, organizational, or both types of data?
  • What is the ease of collecting and analyzing the data?
  • Can existing evaluations or indicators be used to measure the impact of learning?

What are the drawbacks of microlearning & how to mitigate them?

Microlearning does have some potential drawbacks. For one thing, it can be easy for learners to become overwhelmed by the sheer volume of micro-lessons that they are expected to complete. Additionally, microlearning can sometimes result in a fragmented understanding of a topic, as learners are only exposed to small pieces of information at a time, and it often does not provide an opportunity for learners to practice and apply what they have learned. These drawbacks can be avoided or mitigated when microlearning is deliberately designed into larger learning activities.

Another potential drawback of microlearning is that it can be difficult to maintain a consistent level of quality control. With so much content being produced by so many different people, it can be hard to ensure that all of the material is accurate and up to date. This problem can be mitigated by careful selection of materials and regular quality checks.

Finally, microlearning can create a significant amount of work for teachers. To properly incorporate microlearning into their classrooms, teachers need to have a good understanding of the material and be able to effectively facilitate discussion and debate. While it may require some additional effort on the part of teachers, microlearning feels worth it because it has the potential to significantly improve student engagement and learning outcomes.

Which tools can you use to create microlearning?

While microlearning does not necessarily require the use of digital tools, the reach and potential of these types of learning experiences are magnified by technology. Because microlearning is so short and usually discrete, many types of tools and methods of delivery can be used. Formal authoring tools such as LMSs and Articulate can be used, but are not required; any tool that can create a static or dynamic piece of content will work, and any delivery system can be used to disseminate the learning. Making microlearning relevant and specific to the learning context, environment, and audience is key to selecting a content creation tool and delivery system (Fennelly-Atkinson & Dyer, 2021).

Summary

To wrap it up, microlearning means breaking down and chunking learning into bite-sized pieces. Microlearning might be small, but it can have a big impact on powerful teaching and learning. It can take many different forms, and there are just as many content-creation tools and delivery platforms to match. Likewise, there are a variety of ways to assess microlearning depending on the goal and purpose for its use. There is no one correct way to create microlearning; it can be as simple as listening to the pronunciation of words in an online audio dictionary. Teachers can use this flexible method of microlearning to support research-based instructional practices and personalize learning experiences.

So how might you use this approach to meet the modern learner’s needs? Tweet @EducatorCIRCLS and be part of the conversation.

References

Fennelly-Atkinson, R., & Dyer, R. (2021). Assessing the Learning in microlearning. In Microlearning in the Digital Age (pp. 95-107). Routledge.

Teague, C. (2021, January 11). It’s All About microlearning. https://community.simplek12.com/webinar/5673

Apprentice Learner: Artificial Intelligence (AI) in the Classroom

by Sarah Hampton

One of my favorite things about CIRCLS is the opportunity to collaborate with education researchers and technology developers. Our goal as a community is to innovate education using technology and the learning sciences to give more learners engaging educational experiences to help them gain deep understanding. To reach that goal, we need expertise from many areas: researchers who study how we learn best, teachers who understand how new technologies can be integrated, and developers who turn ideas into hardware or software.

Recently I was reminded of a meeting Judi, Pati, and I had with Daniel Weitekamp in June 2020. Daniel, a PhD student at Carnegie Mellon University at the time, was developing an AI tool for teachers called Apprentice Learner.

A stacked addition problem demonstrating carrying.

Figure 1. The Apprentice Learner interface that students use. The user can request a hint or type in their answer and then hit done.

Apprentice Learner looks a bit like a calculator at first glance, so an onlooker might be tempted to say, “What’s so cutting edge about this? We’ve been able to do basic math on calculators for years.” But we need to understand the difference between traditional software and software using artificial intelligence (AI) to appreciate the new benefits this kind of tool can bring to the education table.

In a basic calculator, there’s an unchanging program that tells the screen to display “1284” when you type in “839+445.” There’s no explanation given for how and why the programming behind a calculator works. Yet, for each math problem someone could type in a calculator, there is an answer that has been explicitly programmed to be displayed on the screen.

Contrast a calculator with Apprentice Learner, which uses machine learning (a type of artificial intelligence). No one tells Apprentice Learner to display “1284” when it sees “839+445.” Instead, it has some basic explicit instructions and is given lots of examples of correctly solved problems adding two or more columns of numbers. Then it has to figure out how to answer new questions. The examples it is given are called training data. In this case, Apprentice Learner was given explicit instructions about adding single-digit numbers and then lots of training data (multidigit addition problems with their answers) such as “21+43=64,” “49+8=57,” and “234+1767=2001.” Then it starts guessing at ways to arrive at the answers given in the training data.

The first guess might be to stack the numbers and add each column from left to right. That works perfectly for “21+43,” but gives an incorrect answer of “129” for “49+8.”

Two guesses for adding numbers

The second guess might be to stack the numbers and add each column from right to left. Again, that works perfectly for “21+43.” Unfortunately, that would give an answer of “417” for “49+8.”

The software continues finding patterns and trying out models until it finds one that fits the training data best. You can see below that, eventually, Apprentice Learner “figured out” how to regroup (aka carry) so it could arrive at the correct answer.

A stacked addition problem
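The guess-and-check search described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not the actual Apprentice Learner code: each candidate procedure is scored against the training examples from the text, and the best-fitting one wins.

```python
# Toy sketch of inductive model search: score candidate addition
# procedures against training data and keep the best fit.

def columns(a, b, align):
    """Pair up the digits of a and b, padding the shorter number with zeros."""
    sa, sb = str(a), str(b)
    w = max(len(sa), len(sb))
    if align == "left":
        sa, sb = sa.ljust(w, "0"), sb.ljust(w, "0")
    else:
        sa, sb = sa.rjust(w, "0"), sb.rjust(w, "0")
    return zip(sa, sb)

def guess_left_no_carry(a, b):
    # First guess: stack the numbers and add each column left to right,
    # with no carrying.
    return int("".join(str(int(x) + int(y)) for x, y in columns(a, b, "left")))

def guess_right_no_carry(a, b):
    # Second guess: add each column right to left, still with no carrying.
    return int("".join(str(int(x) + int(y)) for x, y in columns(a, b, "right")))

def guess_right_with_carry(a, b):
    # Third guess: right to left with regrouping (carrying) -- this is
    # ordinary addition.
    return a + b

TRAINING_DATA = [(21, 43, 64), (49, 8, 57), (234, 1767, 2001)]

def score(guess):
    """Fraction of training examples a candidate procedure gets right."""
    return sum(guess(a, b) == c for a, b, c in TRAINING_DATA) / len(TRAINING_DATA)

candidates = [guess_left_no_carry, guess_right_no_carry, guess_right_with_carry]
best = max(candidates, key=score)
```

Running this reproduces the story above: the first guess gives 129 for 49+8, the second gives 417, and only the carrying procedure fits all of the training data.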

So what are the implications for something like this in education? Here are a few of my thoughts.

Apprentice Learner models inductive learning which can help pre-service teachers

Induction is the process of establishing a general law by observing multiple specific examples. It’s the basic principle machine learning uses. Inductive reasoning tasks such as identifying similarities and differences, pattern recognition, generalization, and hypothesis generation also play important roles in learning mathematics (see Haverty, Koedinger, Klahr, and Alibali). Multiple studies have shown that greater learning occurs when students first induce mathematical principles themselves rather than having the principles directly explained at the outset (see Zhu and Simon; Klauer; and Koedinger and Anderson).

However, instructional strategies that prompt students to reason inductively prior to direct instruction can be difficult for math teachers to implement if they haven’t experienced learning math this way themselves. Based on conversations with multiple math teacher colleagues over the years, most of us learned math in a more direct manner, i.e., the teacher shows and explains the procedure first and then the learner imitates it with practice problems. (Note: even this language communicates that there is “one right way” to do math, unlike induction, in which all procedures are evaluated for usefulness. This could be a post in its own right.)

Apprentice Learner could provide a low-stakes experience to encourage early-career teachers to think through math solutions inductively. Helping teachers recognize and honor multiple student pathways to a solution empowers students, helps foster critical thinking, and increases long-term retention. (See Atta, Ayaz, and Nawaz and Pokharel) This could also help teachers preempt student misconceptions (like column misalignment caused by a misunderstanding of place values and digits) and be ready with counterexamples to show why those misconceptions won’t work for every instance, much like I demonstrated above with Apprentice Learner’s possible first and second guesses at how multi-digit addition works. Ken Koedinger, professor of human-computer interaction and psychology at CMU, put it like this: “The machine learning system often stumbles in the same places that students do. As you’re teaching the computer, we can imagine a teacher may get new insights about what’s hard to learn because the machine has trouble learning it.”

The right training data is crucial

What would have happened if the training data had contained only some types of problems? What if they were all two-digit numbers? Then it wouldn’t have mattered whether you stacked them left to right or right to left. What if none required regrouping/carrying? Then adding each column independently, with no carrying, would fit every training example, even though it fails in general. But when all the edge cases are included, the model is more accurate and robust.

Making sure the training data is large and varied enough to cover all the edge cases is crucial to the success of any AI model. Consider what has already happened when insufficient training data was used for facial recognition software: “A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18-30 years old.” Some of the most historically excluded people were most at risk for negative consequences of the AI failing. What’s important for us as educators? We need to ask questions about things like training data before using AI tools, and do our best to protect all students from negative consequences of software.
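The edge-case point can be seen concretely with a small hypothetical check in the same spirit as the addition example: a procedure that never carries passes every no-carry training problem, so only problems that require carrying can rule it out.

```python
# Toy check (invented data): a "never carry" addition procedure is
# indistinguishable from true addition on no-carry problems.

def add_right_no_carry(a, b):
    # Add each right-aligned column of two-digit numbers independently,
    # never carrying.
    sa, sb = str(a).rjust(2, "0"), str(b).rjust(2, "0")
    return int("".join(str(int(x) + int(y)) for x, y in zip(sa, sb)))

def agrees_with_addition(data):
    """Does the flawed procedure match real addition on every example?"""
    return all(add_right_no_carry(a, b) == a + b for a, b in data)

easy_data = [(21, 43), (12, 34), (50, 27)]    # no carrying needed
edge_data = easy_data + [(49, 8), (85, 19)]   # carrying required
```

On `easy_data` the flawed procedure agrees with real addition everywhere, so no amount of that data can expose it; the carrying problems in `edge_data` immediately catch it out.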

Feedback is incredibly advantageous

A flowchart demonstrating different ways a user can give direct input to Apprentice Learner

Figure 2. Diagram of how a user gives feedback to the Apprentice Learner system.

One of the most interesting things about Apprentice Learner is how it incorporates human feedback while it develops models. Instead of letting the AI run its course after the initial programming, it’s designed for human interaction throughout the process. The developers’ novel approach allows Apprentice Learner to be up and running in about a fourth of the time compared to similar systems. That’s a significant difference! (You can read about their approach in the Association for Computing Machinery’s Digital Library.)

It’s no surprise that feedback helps the system learn, in fact, there’s a parallel between helping the software learn and helping students learn. Feedback is one of the most effective instructional strategies in our teacher toolkit. As I highlighted in a former post, feedback had an average effect size of 0.79 standard deviation – an effect greater than students’ prior cognitive ability, socioeconomic background, and reduced class size on students’ performance. I’ve seen firsthand how quickly students can learn when they’re given clear individualized feedback exactly when they need it. I wasn’t surprised to see that human intervention could do the same for the software.

I really enjoyed our conversation with Daniel. It was interesting to hear our different perspectives around the same tool. (Judi is a research scientist, Pati is a former teacher and current research scientist, Daniel is a developer, and I am a classroom teacher.) I could see how this type of collaboration during the research and development of tools could amplify their impacts in classrooms. We always want to hear from more classroom teachers! Tweet @EducatorCIRCLS and be part of the conversation.

Thank you, Daniel Weitekamp (PhD candidate, Carnegie Mellon University), for talking with us and for reviewing this post.

Learn More about Apprentice Learner:

Learn More about Math Teaching and Learning:

How Can AI Systems Support Teachers: 5 Big Ideas from the Learning Sciences

This post was originally published on the Digital Promise website.

By Pati Ruiz and Judi Fusco

The learning sciences study the design and implementation of effective learning environments by drawing on a variety of perspectives across a range of physical, social, and technological spaces1. The learning sciences focus on human learning, helping individuals achieve their fullest potential and attain 21st-century skills. Because of this focus, the learning sciences should be foundational in the design and development of emerging technologies for teaching and learning. AI systems are an emerging technology that is starting to play a significant role in the redesign of learning environments. To increase our chances of creating successful AI systems for learning, they should be grounded in the learning sciences. We’ll discuss five big ideas from the learning sciences in relation to the design of AI systems: Representation and Supports; Collaboration; How Students Think; Building on Students’ Cultural and Linguistic Assets; and Assessment and Feedback. We propose these big ideas as a starting point in the design of better AI systems.

Big Idea 1: Representation and Supports

The learning sciences have found that enabling students to make connections across multiple representations (for example, graphs, writing, images, maps, blocks, etc.) contributes to knowledge construction. Different forms of representation give them a way to make sense of concepts in the best way that helps them construct their knowledge. How can this big idea be used in designing an AI system?

In a classroom where a teacher is fielding questions from students about a specific species of frog, an AI system can support the teacher by quickly searching for and projecting a set of visual representations of the frog that are appropriate for the students and have high-quality information for the teacher and students. When teaching about a metabolic function, an animation might help, and the AI system could share the animation and also point to text or other representations that may help students make connections to understand the process. By giving students and teachers just-in-time support like a relevant animation or engaging follow-up questions, AI systems can support teachers to orchestrate learning experiences by automating tasks (as described above) so teachers can spend more time focused on students. Beyond those types of just-in-time supports, AI systems can further support the engagement of all students in sustained creative work—something that has been a challenging problem in the design of learning environments.

Big Idea 2: Collaboration

The learning sciences have found that learning environments should be designed to foster collaboration and help learners work together to share and make sense of challenging problems. Research points us toward more social and collaborative learning environments.

AI systems could support this big idea by making recommendations for how teachers group students or by giving students themselves prompts that may lead to shared understanding when working in groups without the teacher. Emerging AI technologies might help teachers ask different groups the right guiding questions as the AI system “listens” to what each group is discussing. An AI system that asks questions might also be able to assess the answers to those questions, help students arrive at the same conceptual understanding, and determine when the group is ready for a new task.

Big Idea 3: How Students Think

The learning sciences have found that learning environments should be not only collaborative, but also foster adaptive or personalized learning because there is not a single way to learn and individuals have unique needs when it comes to learning environment designs.

AI systems might support teachers in facilitating this big idea by finding instances of student reasoning for teachers to review based on the analysis of video, audio, or student work. AI systems can also quickly provide insights to teachers about what learning path a student is taking and analytics could help teachers understand how each of their students tends to learn a concept based on their writing, speaking, or movements. A teacher might take the feedback given by an AI system and follow-up with students about their individual learning process and make decisions with them about what to do next. By helping students keep track of how they are practicing and providing scaffolds when they are needed and removing them when a student is ready, an AI system can support students’ unique learning needs.

Big Idea 4: Building on Students’ Cultural and Linguistic Assets

The learning sciences have found that learning and teaching are cultural processes and that we best support learning when we include students’ cultural and linguistic backgrounds as pedagogical assets. This big idea means that AI systems need to support learning environments that enable teachers and learners to address the multiple elements of learning, including identity and culture. To do this, developers need to restructure the assumptions that are made about learners and what they know by keeping both teachers and learners in the loop. For example, AI systems can help personalize the materials for Spanish-speaking students and their parents by translating sections of text, or by providing just-in-time translations so that they can more fully participate in learning experiences. Another personalization could be an AI system where the agent speaks to students and engages them using speech patterns similar to those of the student.

Big Idea 5: Assessment and Feedback

There’s been a lot of discussion around how AI systems can support teachers and students with new types of assessment, such as stealth assessment or formative assessment. Formative assessment provides specific information to a teacher about students’ strengths and challenges so the teacher can adapt instruction to meet students’ needs. Students’ needs vary and can be challenging to anticipate; this is where AI systems can support teachers and learners. We won’t get into assessment and feedback more here, but check out this Educator CIRCLS webinar on Assessment or read this post on AI and Formative Assessment to learn more about this big idea.

Looking ahead

These big ideas from the learning sciences should be incorporated into AI systems to create better, user-centered products. In addition, educators need to be involved in the process because they have valuable insights about what is working and not working in ways that complement researchers’ expertise. Merlyn Mind Practitioner Advisory Board member, Grace Magley reminds us that teachers “have to see real benefits, not just new tech” and “they need to be shown how it would work in a classroom full of diverse learners. They need to see benefits for the teacher as well as the learners.”

1Sawyer, R. (Ed.). (2014). The Cambridge Handbook of the Learning Sciences (2nd ed., Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139519526

I’m a Teacher, Will Artificial Intelligence Help Me?

Robot caricature in a yellow circle thinks of 0's and 1's, a teacher in a red heart thinks of people

by Judi Fusco and Pati Ruiz

Artificial Intelligence (AI) systems are becoming more prevalent everywhere including education. Educators often seem to wonder, “What is it?” and, “What can it do?” Let’s address these questions and then discuss why and how YOU should be involved!

What is it and what can it do for teachers?

Artificial intelligence (AI) is a field of computer science that lets machines make decisions and predictions. The goal of AI is to create machines that can mimic human capabilities. To do this, AI systems use many different techniques. You are probably using AI systems every day because they are embedded in our mobile phones and cars and include things like face recognition to unlock your phone, digital voice assistants, and mapping/route recommendations. We’re not going to go into the details of how AI works in this post, but you can read a prior post on AI and check out this glossary of AI terms that might be helpful if you want more background on the topic. In this post, we will focus on examples of AI systems that can help teachers.

Teachers have to do countless tasks, such as lesson planning, teaching, grading, mentoring, classroom management, keeping up with technology in the classroom and new pedagogical practices, monitoring progress, and administrative work, all while keeping students’ social and emotional needs in mind. While AI has come a long way since the 1950s, when the term was coined and work on Intelligent Tutoring Systems began, it cannot replace a teacher in the classroom. We will share examples of how existing AI systems have successfully helped teachers and reduced their load.

Example: Personalized online math learning software for middle and high school students

Mathia provides coaching to students as they solve math problems and gives teachers a detailed picture of where each student is, as well as suggestions for conversation starters to talk about each student’s understanding. This support allows teachers to spend more time with students focused on learning, while also directly giving the students additional, useful feedback as they solve math problems.

Example: A platform that provides immediate feedback to students and assessment data to teachers

Another AI system that supports both teachers and students is ASSISTments. It is also currently focused on math. For students, it gives assistance in the form of hints and instant feedback while they do math homework. For teachers, it gives information about which homework problems were difficult and what the most common wrong answers were. This can prompt teachers to spend time discussing the problems that students need the most help on, and teachers can be sure to re-teach concepts based on common wrong answers.

In addition to teaching content, when you think about all the things a teacher does in managing their classroom and all the “plates” they must juggle to keep 25, 30, or more students on task, engaged, and learning, you can imagine they could use some support. The next systems described primarily support teachers.

Example: A digital assistant for teachers

One AI system that helps with classroom management tasks is a multimodal digital assistant specifically developed for teachers with privacy in mind, called Merlyn. Merlyn looks like a small speaker, but does so much more. It allows teachers to use voice and a remote control to control content from a distance. For example, with Merlyn teachers can set timers and switch displays between their laptop, document camera, and interactive whiteboard. Teachers can control a web browser on their laptop and do things like share a presentation, go to a specific point in a video, show a website, or search. This frees them up to walk around the classroom and interact with students more easily.

Other ways AI systems can support teaching and learning

The examples above show three categories of how AI systems have helped teachers and their students. Here are three more examples. First, TeachFX is an AI system that can analyze the conversation from a classroom session and identify how much the teacher talked versus the students. The tool also identifies whether teachers let students build on each other’s thoughts, leading to discussions. With the help of this AI system, teachers can work to engage their students in discussions and reflect on their practice.

Grading is another task that is very important but very time consuming. Gradescope, for example, supports instructors in grading their existing paper-based and digital assignments in less time than it normally takes. It does this by scanning text and sorting similar responses together; the teacher grades some of each type, the system then “learns” from the teacher, automatically grades the rest, and sends the grading back to the teacher for review.
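The “sort similar responses together” idea can be sketched very roughly as follows. This is a hypothetical illustration, not Gradescope’s actual method (their system learns much richer notions of similarity): here, short answers are simply normalized and bucketed so a teacher grades each bucket once.

```python
# Rough sketch: bucket short answers so similar ones are graded together.
from collections import defaultdict

def normalize(answer: str) -> str:
    # Lowercase and collapse whitespace so trivially different answers match.
    return " ".join(answer.lower().split())

# Hypothetical student responses to one short-answer question.
responses = {
    "s1": "The mitochondria",
    "s2": "the   mitochondria",
    "s3": "Mitochondria",
    "s4": "the nucleus",
}

buckets = defaultdict(list)
for student, answer in responses.items():
    buckets[normalize(answer)].append(student)

# The teacher grades one answer per bucket, and the grade fans out to
# every student in that bucket.
```

Even this crude exact-match grouping cuts four papers down to three grading decisions; a learned similarity model extends the same idea to handwriting and free-form wording.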

Finally, AI systems that are specialized within a subject matter can allow teachers to set up content-specific learning experiences. For example in the domain of science, Inq-ITS, allows teachers to select digital labs for their middle school students. When completing the assigned digital labs, students learn by doing. Inq-ITS autoscores the labs in real-time and shows the teacher performance updates for each student. A teacher can use the reports to provide the appropriate support to students who need additional help. Inq-ITS also supports students with hints while performing the labs.

Educators Must be Involved in the Design of AI Systems

The AI systems described above support or augment, but never replace, a teacher. We believe that AI systems can help by doing things that machines are good at while leaving teachers to do the things that humans do best.

The AI systems above are also designed by teams that have made education and learning environments the main audience for their systems. They have also included teachers in their design process. There are other AI tools that exist and even more that are being developed to support teachers and students on other activities and tasks, but some don’t have the same focus on education. We think that it’s important that in the design of AI systems for classrooms, educators – the end-users – need to be involved in the design.

Some of the teams that design AI systems for education haven’t been in a classroom recently and when they were they probably weren’t the teacher. To make a technology that works in classrooms requires classroom experts (the main users) to be part of the design process and not an afterthought. When teachers give feedback, they help ensure 1) that systems work in ways that make sense for classrooms in general, and 2) that systems would work well in their specific classroom situations. (We’ll discuss why this is the case in another future blog post.)

A final, yet very important reason for educators to be involved, is that while AI systems can bring opportunities to support teaching and learning, there are also privacy, ethics, equity, and bias issues to be aware of. We don’t want to add anything to your already full plate, but as technologies come into your classroom, you should ask questions about how the system supports students, if the systems were designed for students like your students, what the privacy policies are, and any implications that might affect your students.

We understand that most teachers don’t have a single extra minute but it is crucial to have current teachers in the design process. If you want to learn and think about AI systems, as they become more prevalent, you will become an even more invaluable teacher or technology leader in your school/district. Your voice is important and getting more educators involved makes a more powerful collective voice.

Looking ahead

If you’re still reading, you probably have an interest in AI systems. Teachers are critical to the design of effective AI technologies for schools and classrooms, and we hope this post has given you some insights into how AI systems might support you and your students. If you are interested in getting involved, we suggest a few places to connect below. Consider this blog post an invitation to connect with us; we hope you’ll join us in thinking about the future of AI in education.

In our next post we will discuss how AI systems informed by learning science principles may help solve problems in learning environments.

Let us know your thoughts @educatorCIRCLS.

Ways to join:
Educator CIRCLS
AI CIRCLS
Join the ASSISTments Teacher Community
Leadership Programs — TeachFX

Book Review: You Look Like a Thing and I Love You

This post was originally published on CIRCLEducators.org in October, 2020.

by Judi Fusco

Box with eyes, arms and feet holds a martini glass and an outline of a heart. Text reads: And I Love You

During CIRCL Educators’ Summer of Artificial Intelligence (AI), I read the book You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place1, by Dr. Janelle Shane. I got the recommendation for it from fellow CIRCL Educator, Angie Kalthoff.

I found the book helpful even though it is not about AI in education. I read and enjoyed the e-book and the audio version. As I started writing this review, I was driving somewhere with one of my teenagers and I asked if we could listen to the book. She rolled her eyes but was soon laughing out loud as we listened. I think that’s a great testament to how accessible the book is.

Teaching an AI

Many of us use AI products like Siri or Alexa on a regular basis. But how did they get “smart?” In the book, Dr. Shane writes about the process of training machine learning2 systems to be “intelligent.” She tells us how they certainly don’t start smart. Reading about the foibles, flailings, and failings that she has witnessed in her work helped me understand why it is so important to get the training part right, and what needs to be considered as new products are developed.

Dr. Shane starts out by comparing machine learning and rule-based AI systems, which are two very different types of AI systems. Briefly, a rule-based system uses rules written by human programmers as it works with data to make decisions. By contrast, a machine learning algorithm3 is not given rules. Instead, humans pick an algorithm, give it a goal (maybe to make a prediction or decision), and give it example data that helps the algorithm learn4; then the algorithm has to figure out how to achieve that goal. Depending on the algorithm, it may discover its own rules (for some algorithms this means adjusting weights on connections between what is input and what is output). From the example data, the algorithm “learns,” or rather improves what it produces, through its experience with that data. It’s important to note that the algorithm is doing the work to improve, not a human programmer. In the book, Dr. Shane explains that after she sets up the algorithm with a goal and gives it training data, she goes to get coffee and lets it work.
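The contrast between the two approaches can be made concrete with a small sketch (my own illustration, not from the book), using temperature conversion as the task. In the rule-based version a human writes the rule; in the machine learning version the program is given only input/output examples and repeatedly nudges two numbers until its predictions fit them.

```python
# Rule-based AI: a human writes the rule explicitly.
def to_fahrenheit_rule(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: only (input, output) examples are provided; the
# "rule" (two weights) is discovered by trial and correction.
examples = [(0.0, 32.0), (1.0, 33.8), (2.0, 35.6), (-1.0, 30.2)]

w, b = 0.0, 0.0                      # the model starts knowing nothing
for _ in range(20000):               # many rounds of small corrections
    for c, f in examples:
        error = (w * c + b) - f      # how wrong the current guess is
        w -= 0.01 * error * c        # nudge the weights against the error
        b -= 0.01 * error

# After training, (w, b) approximate the human-written rule's (1.8, 32),
# even though no one ever told the program those numbers.
```

Note that the human’s job shifted: instead of writing the rule, the human chose the algorithm, the goal (minimize the error), and the training data, which is exactly the division of labor Dr. Shane describes.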

Strengths and Weaknesses

There are strengths and weaknesses in the machine learning approach. A strength is that as the algorithm tries to reach its goal, it can detect relationships and features in the data that the programmer may not have thought were important, or may not even have been aware of. This can be either good or bad.

One way it can be good is that an AI sometimes tries a novel solution because it isn’t bogged down by knowledge of the rules and constraints of the world. However, not knowing about constraints in the world can simultaneously be bad and lead to impossible ideas. For example, in the book, Dr. Shane discusses how in simulated worlds an AI will try things that won’t work in our world because it doesn’t understand the laws of physics. To help the AI, a human programmer needs to specify what is and isn’t possible. An AI will also take shortcuts that may lead to the goal but may not be fair. In one case, while playing a game, an AI discovered that a specific move would make its opponent’s computer run out of RAM and crash. The AI would make that move and win every time. Dr. Shane discusses many other instances where an AI exploits a weakness to look like it’s smart.

In addition, one other lesson from machine learning work is that an AI highlights and exacerbates problems it learns from training data. For example, much training data comes from the internet, and much of the data on the internet is full of bias. When biased data are used to train an AI, the biases and problems in the data become what guide the AI toward its goal. Because of this, our biases, found on the internet, become perpetuated in the decisions the machine learning algorithms make. (Read about some of the unfair and biased decisions that have occurred when AI was used to make decisions about defendants in the justice system.)

Bias

People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)

In the book, Dr. Shane says, machine learning occurs in the AI algorithms by “copying humans” — the algorithms don’t find the “best solution” or an unbiased one, they are seeking a way to do “what the humans would have done” (p 24) in the past because of the data they use for training. What do you think would happen if an AI were screening job candidates based on how companies typically hired in the past? (Spoiler alert: hiring practices do not become less discriminatory and the algorithms perpetuate and extend biased hiring.)
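To see how “doing what the humans would have done” plays out, here is a deliberately tiny, invented example (the groups and records are hypothetical, and real screening systems are far more complex): a model that simply copies the majority of past decisions reproduces the historical bias exactly.

```python
# Hypothetical illustration: a "copy the humans" model inherits bias.
from collections import defaultdict

# Invented historical decisions: (group, qualified, hired). Group "B"
# candidates were not hired even when qualified -- the bias in the data.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": record what humans usually did for each (group, qualified)
# combination -- the crudest possible way of copying past behavior.
votes = defaultdict(list)
for group, qualified, hired in history:
    votes[(group, qualified)].append(hired)

def predict_hire(group, qualified):
    past = votes[(group, qualified)]
    # Predict whatever the majority of past decisions did.
    return sum(past) > len(past) / 2
```

Two equally qualified candidates now get different predictions purely because of their group, and nothing in the training objective pushes back: matching the past decisions is the goal.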

A related problem comes about because machine learning AIs make their own rules. In some machine learning algorithms these rules are not explicitly stated, so we (humans, i.e., the creators and the users) don’t always know what an AI is doing. There are calls for machine learning systems to write out the rules they create so that humans can understand them, but this is a very hard problem to solve. (In addition, some algorithms are proprietary, and companies won’t let us know what is happening.)
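As a hypothetical illustration of what “writing out the rules” could look like, the sketch below stores a tiny, invented decision tree (one of the rule-based approaches mentioned in the notes) and prints its learned rules as plain if/else statements a human can read; the feature names and thresholds are made up:

```python
# A hypothetical learned decision tree, stored as nested tuples:
# (feature, threshold, subtree_if_at_or_below, subtree_if_above), or a leaf label.
tree = ("hours_studied", 5,
        ("attended_review", 0, "fail", "pass"),
        "pass")

def explain(node, indent=""):
    """Turn the tree into a list of human-readable if/else rule lines."""
    if isinstance(node, str):  # a leaf: just a prediction
        return [indent + "predict: " + node]
    feature, threshold, low, high = node
    return ([indent + f"if {feature} <= {threshold}:"]
            + explain(low, indent + "    ")
            + [indent + "else:"]
            + explain(high, indent + "    "))

print("\n".join(explain(tree)))
```

For a small tree this is easy; the hard research problem is producing explanations like this for models with millions of learned parameters.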

Integrating AIs into our lives

It feels necessary to know how a machine makes decisions when it is tasked with decisions about people’s lives (e.g., prison release, hiring, and job performance). We should not blindly trust how AIs make decisions; an AI has no idea of the consequences of its decisions. We can still use AIs to help us with our work, but we should be very cautious about the types of problems we automate. We also need to ensure that an AI makes clear what it is doing, so that humans can review the automation, know how to override its decisions, and understand the consequences of an incorrect decision. Dr. Shane reminds us that an “AI can’t be bribed but it also can’t raise moral objections to anything it’s asked to do” (p. 4).

In addition, we need to ensure that the data we use for training are as representative as possible to avoid bias, that the system can’t take shortcuts to meet its goal, and that the system works for many different populations (e.g., across gender, race, and learning differences). Also, an AI is not as smart as a human; in fact, Dr. Shane shares that most AI systems using machine learning (in 2019) had the approximate brainpower of a worm. Machine learning can help us automate tasks, but we still have a lot of work to do to ensure that AIs don’t harm people.

What are your thoughts or questions on machine learning or other types of AI in education? Tweet to @CIRCLEducators and be part of the conversation.

Thank you to James Lester for reviewing this post. We appreciate your work in AI and your work to bring educators and researchers together on this topic.

See a recent TED Talk by author Janelle Shane.


Notes:

  1. Read the book to find out what the title means!
  2. Machine learning is one of several AI approaches.
  3. Machine learning is a general term that also includes neural networks and the more specialized neural network class of deep learning. Note also that decision-tree algorithms are a well-known class of ML algorithms that use rules.
  4. Some algorithms “learn” with labeled examples and some without, but that’s a discussion beyond the scope of this post.

Supporting Computationally Rich Communication During Remote Learning: Lessons Learned

By Colin Hennessy Elliott & the SchoolWide Labs Team

This post was written by a member of the SchoolWide Labs research team, about their experience during the pandemic and what they learned from middle school science and STEM teachers as part of a larger Research-Practice Partnership between a university and a large school district in the United States. The post was reviewed by practicing Educator CIRCLS members. The purpose of the blog is to help open the door between the worlds of research and practice a bit wider so that we can see the differing perspectives and start a dialogue. We are always looking for more practitioners and researchers who want to join us in this work.

The COVID-19 pandemic pushed many school communities in the US online last school year. Teachers were charged with accommodating many needs while maintaining care and compassion for students and their families. As a multi-year research team aimed at supporting teachers in integrating computational thinking into science and STEM learning, we worked with renewed senses of compassion, creativity, and struggle. We witnessed how students and teachers innovatively developed computationally rich communication using the technologies from our project while teaching and learning remotely. Below we share a few moments from the 2020-21 school year that have helped us learn what it takes to engage middle school students in computational practices (e.g., collaborating on programming a physical system, interpreting data from a sensor) that are personally relevant and community-based. These moments offer lessons on how collaboration and communication are key to learning, regardless of whether the learning takes place in person or remotely.

Who we are

The SchoolWide Labs research team, housed at the University of Colorado Boulder with collaborators at Utah State University, has partnered with Denver Public Schools (DPS) for over five years. We work with middle school science and STEM teachers to co-develop models for teacher learning that support the integration of computational thinking into science and STEM classrooms. The team selected, assembled, and refined a programmable sensor technology with input from teachers on what would be feasible in their classrooms and in collaboration with a local electronics retailer (SparkFun Electronics). This collaboration focused particularly on programmable sensors because they offer opportunities for students to develop deeper relationships with scientific data, as producers rather than just data collectors.1 This aligns with modern scientific practice where scientists often tinker with computational tools to produce the data they need to answer specific questions.

The Data Sensor Hub (DaSH) is a low-cost physical computing system used in the curriculum and professional learning workshops developed by the SchoolWide Labs team. Keeping the DaSH low cost was a priority for the team as a matter of access and equity. The DaSH consists of the BBC micro:bit, a connection expander called the gator:bit, and an array of sensors that can be attached to the micro:bit and gator:bit with alligator clips (see Figure 1). Students can easily assemble the DaSH themselves to experience the physical connections and hard wiring. Students and teachers can write programs for the DaSH using MakeCode, a block-based programming environment that runs in a web browser, making it easy to use with various computer setups. For students with more programming experience, MakeCode also offers the option to program the micro:bit in Python or JavaScript.
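For a sense of what programming the DaSH looks like in text form, here is an illustrative MakeCode-style Python sketch. It is pseudocode in the sense that it runs only in the MakeCode editor for the micro:bit, not in standard Python, and the exact sensor functions available depend on the MakeCode extensions loaded:

```python
# Illustrative MakeCode micro:bit Python: show the micro:bit's built-in
# temperature reading on the LED display whenever button A is pressed.
def on_button_pressed_a():
    basic.show_number(input.temperature())

input.on_button_pressed(Button.A, on_button_pressed_a)
```

The block-based view of the same program is what most students start with; the text view becomes useful as their programs grow.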


Figure 1.The Data Sensor Hub (DaSH). The picture on the left depicts the components of the DaSH used with the Sensor Immersion Unit including the micro:bit, Gator:bit and three sensors (top to bottom: soil moisture sensor, microphone sensor, environmental sensor). The picture on the right shows a teacher and student interacting with the DaSH set up just for the microphone sensor.

Before the COVID-19 pandemic, our research team co-designed curricular units with teachers interested in using the DaSH to engage middle school students in scientific inquiry. Currently there are four units available on our website, three that use the DaSH and one that uses a 3-D printer. The Sensor Immersion Unit – the only unit teachers implemented remotely in the 2020-21 school year – has students explore the DaSH in use via a classroom data display, learn basic programming, and create their own displays that collect environmental data (sound, temperature, carbon dioxide levels, or soil moisture) to address a question of their choice. For example, one group of students decided to investigate climate change by measuring atmospheric carbon dioxide levels in their neighborhoods and exploring the impact of plants and trees. The goal is for students to develop ownership of the DaSH as a data collection tool by wiring the hardware and programming the software. In the process, they engage in computational thinking and computationally rich communication when they discuss their use of the DaSH with peers and the teacher.

In the 2020-21 school year most middle schools in Denver Public Schools were remote. Several STEM teachers, who had more curricular flexibility, decided to provide DaSHs to students who wanted the responsibility of having them for a period of time. Having the DaSHs in students’ homes made the barriers between home and school less visible, as students conducted place-based investigations and emergently took on the role of data producers; for example, some students shared temperature data and carbon dioxide levels from in and around their homes with the class. Below, we share two examples from observing student and teacher interactions in virtual settings that helped our research team learn what is possible with the DaSH. We also developed new supports to help teachers facilitate extended student collaboration and communication when using the DaSH.

Lesson Learned 1: Increasing student collaboration in virtual settings

One middle school STEM teacher, Lauren (a pseudonym), taught different cohorts of eighth graders in the first two quarters of the 2020-21 school year. A new SchoolWide Labs participant, she was enthusiastic about implementing the Sensor Immersion Unit with her first cohort in the first quarter. She navigated the logistical challenges of getting DaSHs to over half her students along with the pedagogical challenges of adapting the curriculum to a remote setting. After her first implementation, she shared that she was disappointed that her students rarely collaborated or shared their thinking with each other online; other teachers reported similar struggles. Before Lauren’s second implementation, we facilitated several professional learning sessions aimed at supporting teachers in eliciting more student collaboration in remote settings. Through our work together, we identified the importance of establishing collaboration norms for students, offering continued opportunities to meet in small groups virtually, and modeling how to make work visible to one another. In her second implementation, with new students the next quarter, Lauren intentionally discussed norms and roles for group work in “breakout rooms” (her school’s video software lacked breakout room functionality, so each group met in a separate video call). One of the resulting virtual rooms, with three eighth graders working on the Sensor Immersion Unit, was especially encouraging for both Lauren and our research team. Without their cameras on at any point, the three boys shared their screens (swapping depending on who needed help or wanted to show the others) and coordinated their developing programs, each on a different screen, in relation to the DaSHs that two of them had at home. Their collaboration included checking in to make sure everyone was ready to move on (“Everyone ok?”) and the opportunity to ask for further explanation from others at any point (“hold on, why does my [DaSH]…”). With their joint visual attention on the shared screen, the three successfully navigated an early programming challenge using their understanding of the programming environment (MakeCode) and the hints embedded in it (developed by the research team).

Lesson Learned 2: Adapting debugging practices to a virtual environment

Many science and STEM teachers new to our project have struggled to feel confident supporting students as they learn to program the DaSH. They specifically worry about how to support students as they debug the system, which includes finding and resolving potential and existing issues in the code, the hardware (including wiring), or the communication between them. This worry was further magnified when learning had to happen remotely, even with some students having physical DaSH systems at home. The issues teachers encountered in students’ setups were consistent with the bugs we have identified over the course of the project, including: 1) the code and wiring do not correspond, 2) there are problems in the students’ code, and 3) the program is not properly downloaded onto the micro:bit, among others.2

Being unable to easily see students’ physical equipment and provide hands-on support made some teachers wary of even attempting to use the SchoolWide Labs curriculum in a remote environment. However, the teachers who did made intriguing adaptations to how they supported students in identifying and addressing bugs. These adaptations included: 1) meeting one-on-one briefly to ask students about their progress and having them hold their systems up to the camera for a hardware check, 2) holding debugging-specific office hours during and outside of class time, and 3) having students send their code to teachers to review as formative assessments and debugging checks. Although debugging required more time, patience, and creativity from teachers and students, these activities were generally successful in making the DaSHs work and helping students become more adept users.

Future Directions

As teachers and students have returned to face-to-face schooling, the lessons learned during remote instruction continue to inform our work and inspire us as a SchoolWide Labs community. As these two examples show, the ingenuity of teachers and students during a tough shift to virtual learning led to new forms of computational thinking, communication, and collaboration. It became clearer that at least some students need the physical systems at home to deeply engage in the process of being data producers, which reflects how material-rich the curriculum is. Yet simply having the DaSH in hand did not ensure that students would participate in the kinds of communication important for engaging in and learning the targeted science and computational thinking practices.

While we continue to explore the complexity of teacher and student interactions with the DaSH, this past summer and fall we have been working with participating teachers (new and returning from last year) to develop a more specific set of norms, including small-group communication norms and roles. Additionally, we have begun to consider other strategies that may support students in learning programming and debugging skills, such as student-created flowcharts that represent their view of the computational decisions of the DaSH. The shift to virtual learning, and now back to face-to-face instruction, has required us to reflect more deeply on the professional learning that best supports teachers in using the DaSH and its accompanying curriculum in a variety of instructional settings, with both anticipated and unanticipated constraints. We welcome the opportunity to continue learning with and from colleagues who are similarly engaged in this highly challenging but extremely rewarding endeavor to promote computationally rich, discourse-centered classrooms.

More Information

More on SchoolWide Labs work:

  • Visit our website.
  • Gendreau Chakarov, A., Biddy, Q., Hennessy Elliott, C., & Recker, M. (2021). The Data Sensor Hub (DaSH): A Physical Computing System to Support Middle School Inquiry Science Instruction. Sensors, 21(18), 6243. https://doi.org/10.3390/s21186243
  • Biddy, Q., Chakarov, A. G., Bush, J., Hennessy Elliott, C., Jacobs, J., Recker, M., Sumner, T., & Penuel, W. (2021). A Professional Development Model to Integrate Computational Thinking Into Middle School Science Through Codesigned Storylines. Contemporary Issues in Technology and Teacher Education, 21(1), 53–96.
  • Gendreau Chakarov, A., Recker, M., Jacobs, J., Van Horne, K., & Sumner, T. (2019). Designing a Middle School Science Curriculum that Integrates Computational Thinking and Sensor Technology. Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 818–824. https://doi.org/10.1145/3287324.3287476

Important references for our work

Hardy, L., Dixon, C., & Hsi, S. (2020). From Data Collectors to Data Producers: Shifting Students’ Relationship to Data. Journal of the Learning Sciences, 29(1), 104–126. https://doi.org/10.1080/10508406.2019.1678164

Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining Computational Thinking for Mathematics and Science Classrooms. Journal of Science Education and Technology, 25(1), 127–147. https://doi.org/10.1007/s10956-015-9581-5
______________________
1 See Hardy, Dixon, and Hsi (2020) for more information about data producers and collectors.
2 Table 2 in Gendreau Chakarov et al. (2021) has a full list of common DaSH bugs students have encountered across the project.