

Algorithms, Educational Data, and EdTech: Anticipating Consequences for Students

By Pati Ruiz and Amar Abbott

The 2020-2021 school year is underway in the U.S., and for many students that means using edtech tools in a fully online or blended learning environment. As educators, it is our responsibility to consider how students are using edtech tools and what the unanticipated consequences of using those tools might be. Before introducing edtech tools to students, administrators should spend time considering a range of tools to meet the needs of their students and teachers. In a recent blog post, Mary Beth Hertz described the opportunities for anti-racist work in the consideration and selection of the tools students use for learning. Hertz identified a series of questions educators can ask about the tools they will adopt to make sure those tools serve the best interests of all of their students. Two of the questions in Hertz’s list ask us to consider data and algorithms. In this post, we focus on those two questions and on Hertz’s call to “pause and reflect and raise our expectations for the edtech companies with which we work while also thinking critically about how we leverage technology in the classroom as it relates to our students of color.” The two questions are:

  1. How does the company handle student data?
  2. Has the company tested its algorithms or other automated processes for racial biases?

To help us better understand the issues around these two questions, we will discuss the work of two researchers: Dr. Safiya Noble and Dr. Ruha Benjamin. This post expands on our previous post about Dr. Noble’s keynote address — The Problems and Perils of Harnessing Big Data for Equity & Justice — and her book, Algorithms of Oppression: How Search Engines Reinforce Racism. Here, we also introduce the work of Dr. Ruha Benjamin, and specifically the ideas described in her recent book Race After Technology: Abolitionist Tools for the New Jim Code.

Student Data

In order to understand how companies handle student data, we first need to consider the concept of data. Data are characteristics or information collected in a manner capable of being communicated or manipulated by some process (Wiktionary, 2020). In her keynote speech, Dr. Noble discusses the social construction of data and the importance of paying attention to the assumptions that are made about the characterization of the data being collected. In her book, Dr. Noble shows how Google’s search engine perpetuates harmful stereotypes about Black women and girls in particular. Dr. Benjamin describes the data justice issues we are dealing with today as ones that come from a long history of systemic injustice in which those in power have used data to disenfranchise Black people. In her chapter titled Retooling Solidarity, Reimagining Justice, Dr. Benjamin (2019) encourages us to “question, which humans are prioritized in the process” (p. 174) of design and data collection. With edtech tools, the humans who are prioritized in the process are teachers and administrators; they are the “clients.” We also need to consider and prioritize the affected population: students.

 

When it comes to the collection and use of educational data and interventions for education, there is much work to be done to counteract the coded inequities of the “techno status quo.” In her keynote, Dr. Noble offered a list of suggested interventions, including:

 

  1. Resist making issues of justice and ethics an afterthought or additive
  2. Protect vulnerable people (students) from surveillance and data profiling

 

Center Issues of Justice and Ethics

As described by Tawana Petty in the recent Wired article Defending Black Lives Means Banning Facial Recognition, Black communities want to be seen and not watched. The author writes:

“Simply increasing lighting in public spaces has been proven to increase safety for a much lower cost, without racial bias, and without jeopardizing the liberties of residents.”

What is the equivalent of increasing light in education spaces? What steps are being taken to protect students from surveillance and data profiling? How are teachers and students trained on the digital tools they are being asked to use? How are companies asked to be responsible about the kinds of data they collect?

Schools have legal mandates meant to protect students’ rights, such as the Family Educational Rights and Privacy Act (FERPA) in the U.S. and other policies that protect student confidentiality regarding medical and educational records. Although a lot of forethought has gone into protecting students’ confidentiality, has the same critical foresight been applied when purchasing hardware and software?

 

In her keynote speech, Dr. Noble described the tracking of students on some university campuses through the digital devices they connect to campus Internet or services (like a library or learning management system). The reasoning behind tracking students is to allocate university resources effectively to help students be successful. However, in this article, Drew Harwell writes about the complex ethical issues raised when students are digitally tracked and about an institution’s obligation to keep students’ data private. So, before software or hardware is purchased or used, privacy and ethics issues must be discussed and addressed. Special energy also needs to be dedicated to uncovering any potential “unanticipated” consequences of the technologies. After all, without proper vetting, a bad decision could harm students.
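
To make the concern concrete, here is a minimal sketch of how device connection logs could be turned into the kind of student “engagement” profile described above. The log fields, locations, and the flagging rule are all made up for illustration; this is not any institution’s actual system.

```python
# A minimal sketch of how connection logs can become a student "engagement" profile.
# The log fields, locations, and the flagging rule are hypothetical.

from collections import Counter

# Hypothetical Wi-Fi / service connection log: (student_id, location).
connections = [
    ("s001", "library"), ("s001", "library"), ("s001", "lms"),
    ("s002", "dorm_wifi"),
]

visits = Counter()
for student, location in connections:
    visits[(student, location)] += 1

# A naive "at-risk" rule of the kind such systems might apply:
# flag students who never show up in the library logs.
students = {student for student, _ in connections}
flagged = sorted(s for s in students if visits[(s, "library")] == 0)
print(flagged)  # ['s002']
```

Even this toy example shows how quickly routine connection data becomes an inference about a student, which is exactly why the privacy and ethics questions above need to be settled before such systems are adopted.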

Protect Vulnerable Students

Protecting vulnerable students includes being able to answer Hertz’s question: “Has the company tested its algorithms or other automated processes for racial biases?” But even when a company has tested its algorithms and automated processes, there is often still work to be done, because “unanticipated” results continue to happen. Twitter spokesperson Liz Kelley recently posted a tweet saying: “thanks to everyone who raised this. we tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do.”

She was responding to the experiment shown below where user @bascule posted: “Trying a horrible experiment…Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?”

Twitter’s machine learning algorithm chose to center the white face instead of the Black face when presented with an image in which the white profile picture was shown on top, white space in between, and the Black profile picture below. It did the same when the Black profile picture was on top and the white profile picture was below.

Image: the Twitter experiment described above; the algorithm selects the white face regardless of placement.
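
For readers who want to see what “testing for bias” might look like in practice, here is a minimal sketch of a swap test like the one @bascule ran: the same two faces in both vertical orderings, with a count of which face the model centers on. The predict_crop_center function is a hypothetical stand-in for whatever cropping/saliency model is being audited; it is not Twitter’s actual model or API.

```python
# A minimal sketch of a "swap test" for a cropping/saliency model.
# `predict_crop_center` is a hypothetical stand-in for the model being audited.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TallTestImage:
    """A tall composite image with one face near the top and one near the bottom."""
    top_label: str       # e.g., "white_face" or "Black_face"
    bottom_label: str
    height: int = 3000   # pixels


def run_swap_test(images: List[TallTestImage],
                  predict_crop_center: Callable[[TallTestImage], int]) -> Dict[str, int]:
    """Count which face the crop centers on, across both orderings of the faces."""
    counts: Dict[str, int] = {}
    for img in images:
        center_y = predict_crop_center(img)  # y-coordinate the model chooses to center
        chosen = img.top_label if center_y < img.height / 2 else img.bottom_label
        counts[chosen] = counts.get(chosen, 0) + 1
    return counts


if __name__ == "__main__":
    # Both orderings, so that vertical position alone cannot explain the result.
    pair = [
        TallTestImage(top_label="white_face", bottom_label="Black_face"),
        TallTestImage(top_label="Black_face", bottom_label="white_face"),
    ]
    # Placeholder model that always favors the top of the image, for illustration only.
    always_top = lambda img: img.height // 4
    print(run_swap_test(pair, always_top))  # {'white_face': 1, 'Black_face': 1}
```

Of course, a real audit would use many image pairs and the actual model; the point of the sketch is only that swapping positions is what lets a tester separate position effects from race effects.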

As we can see, the selection and use of tools for learning are complicated and require balancing many factors. As CIRCL Educators, we hope to provide some guidance to help ensure the safety of students, families, and their teachers. Additionally, we are working to demystify data, algorithms, and AI for educators and their students. This work is similar to the work being done by public interest technologists in the communities and organizations described by both Noble and Benjamin. We don’t have all of the answers, but these topics are ones that we will continue to discuss and write about. Please share your thoughts with us by tweeting @CIRCLEducators.

 

References

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press.

data. (2020, August 12). In Wiktionary, The Free Dictionary. Retrieved August 26, 2020, from https://en.wiktionary.org/w/index.php?title=data&oldid=60057733

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Five CIRCL Educators stand next to a Cyberlearning 2019 banner

Harnessing Educational Data: Discussing Dr. Safiya Noble’s Keynote from Cyberlearning 2019

By Pati Ruiz, Sarah Hampton, Judi Fusco, Amar Abbott, and Angie Kalthoff

In October 2019, the CIRCL Educators gathered in Alexandria, Virginia for Cyberlearning 2019: Exploring Contradictions in Achieving Equitable Futures (CL19). For many of us on the CIRCL Educators team, it was our first opportunity to meet in person after working collaboratively online for years. In addition, CL19 gave us opportunities to explore learning in the context of working with technology and to meet researchers with diverse expertise and perspectives. We explored the tensions that arise as research teams expand the boundaries of learning, and we considered how cyberlearning research might be applied in practice.

One of the topics we thought a lot about at CL19 is algorithms. We had the opportunity to hear from keynote speaker Safiya Noble, an Associate Professor at UCLA and author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). In her keynote, The Problems and Perils of Harnessing Big Data for Equity & Justice, Dr. Noble described the disturbing findings she uncovered when she started investigating search algorithms. She was not satisfied with the answer that the way algorithms categorized people, particularly girls of color, was simply what “the public” wanted. She dug in deeper, and what she shared really made us think.

This keynote is related to some of the conversations we’re having about Artificial Intelligence (AI), so we decided to re-watch the recorded version and discuss the implications of harnessing Big Data for students, teachers, schools, and districts. Big Data and algorithms are crucial in much of the work related to AI. We bring this into our series on AI because even though math and numbers seem like they are not culturally biased, there are ways that they are, and ways they can be used to promote discrimination. In this post, we don’t summarize the keynote; instead, we tell you what really got us thinking. We encourage you to watch it too.

Besides discussing algorithms for search, Dr. Noble also discussed the implications of technology, data, and algorithms in the classroom. For example, Dr. Noble shared how she breaks down for her students how a Learning Management System works, so that they know how the technology they are using can inform their professors of how often and how long they log into the system (among other things). She said they were often surprised that their teachers could learn these things. She went on to say:

“These are the kinds of things that are not transparent, even to the students that many of us are working with and care about so deeply.”
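
To make that visibility concrete, here is a minimal sketch of the kind of instructor-facing report an LMS could compute from login events. The event records and field layout are hypothetical, not any particular LMS’s actual data model.

```python
# A minimal sketch of an instructor-facing activity report, assuming a
# hypothetical export of LMS login events; real systems store far more.

from datetime import datetime

# Hypothetical event log: (student_id, login_time, logout_time) as ISO strings.
events = [
    ("s001", "2020-09-01T09:00:00", "2020-09-01T09:45:00"),
    ("s001", "2020-09-03T20:10:00", "2020-09-03T21:40:00"),
    ("s002", "2020-09-02T08:00:00", "2020-09-02T08:05:00"),
]

report = {}
for student, login, logout in events:
    minutes = (datetime.fromisoformat(logout) - datetime.fromisoformat(login)).total_seconds() / 60
    sessions, total = report.get(student, (0, 0.0))
    report[student] = (sessions + 1, total + minutes)

for student, (sessions, total) in report.items():
    print(f"{student}: {sessions} session(s), {total:.0f} minutes online")
# s001: 2 session(s), 135 minutes online
# s002: 1 session(s), 5 minutes online
```

Even this toy report shows how quickly “how often and how long” becomes a judgment about a student; that is the transparency gap Dr. Noble is pointing to.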

Another idea from the talk that particularly resonated with us as teachers is the social value of forgetting. Sometimes there is value in digitally preserving data, but sometimes there is more value in NOT documenting it.

“These are the kinds of things when we think about, what does it mean to just collect everything? Jean–François Blanchette writes about the social value of forgetting. There’s a reason why we forget, and it’s why juvenile records, for example, are sealed and don’t follow you into your future so you can have a chance at a future. What happens when we collect, when we use these new models that we’re developing, especially in educational contexts? I shudder to think that my 18-year-old self and the nonsense papers (quite frankly who’s writing a good paper when they’re 18) would follow me into my career? The private relationship of feedback and engagement that I’m trying to have with the faculty that taught me over the course of my career or have taught you over the course of your career, the experimentation with ideas that you can only do in that type of exchange between you and your instructor, the person you’re learning from, that being digitized and put into a system, a system that in turn could be commercialized and sold at some point, and then being data mineable. These are the kinds of real projects that are happening right now.”

We are now thinking a lot about how to help students and teachers better understand how our digital tools work, and about how to balance the benefits of using technology to help learners against the potential problem of hyper-datafication: saving everything and never letting a learner move past some of their history.

As we think through this tension, and other topics in the keynote, some of the questions that came up for us include:

  • What information is being collected from our students and their families/homes and why? Where does the information go?
  • Who is creating the app that is collecting the data? Are they connected to other programs/companies that can benefit from the data?
  • What guidelines for privacy does the software company follow? FERPA/COPPA? Do there need to be more or updated standards? What policies aren’t yet in place that we need to protect students?
  • What kinds of data are being digitally documented that could still be available years after a student has graduated? How could that impact them in job searches? Or, what happens when our students, who have documented their whole lives digitally, want to run for public office?
  • There are well-documented protocols for destroying students’ physical work, so what documented protocols are in place for their digital work?
  • Are school devices (e.g., Chromebooks or iPads) that contain sensitive student data being shared? Are all devices wiped between school years?
    • Students clean out their desks and lockers at the end of the school year, should we be teaching them to clean out their devices?
    • Do students have an alternative to using software or devices if they or their families have privacy concerns? Should they?
  • Is someone in your district (or school) accountable for privacy evaluation, software selection, and responsible use?
    • How are teachers being taught what to look for and evaluate in software?

In future posts, we’ll cover more of what Dr. Noble suggested based on her work, including the following points she made:

  1. (Re)consider the effect of hyper-datafication
  2. Resist making issues of justice and ethics an afterthought or additive
  3. Protect vulnerable people (students) from surveillance and data profiling
  4. Fund critical digital media research, literacy programs, and education
  5. Curate the indexable web, create multiple paths to knowledge
  6. Reduce technology over-development and its impact on people and the planet
  7. Never give up on the right things for the planet and the people

Dr. Noble on stage at the Cyberlearning 2019 meeting.

Finally, some of us have already picked up a copy of Algorithms of Oppression: How Search Engines Reinforce Racism and if you read it, we would love to hear your thoughts about it. Tweet @CIRCLEducators. Also, let us know if you have questions or thoughts about the keynote and/or algorithms.


Community: A Reflective Journey

by Amar Abbott

Recently, I was asked to be a committee member to help plan this year’s Cyberlearning Convening. I was honored and a little surprised, as I felt I had not contributed to the community since 2016, when I was a Cyberlearning buddy. As a “buddy,” I met professionals in the field, participated in conference discussions, and provided project approval in a mock competition. I also met the Cyberlearning community and gave feedback, from my perspective as an assistive technology expert, on various projects. This experience helped me become a contributor to the educators’ corner of the CIRCL blog. At the time, I did not realize how I was becoming part of the Cyberlearning community. This is what researchers call legitimate peripheral participation (LPP); through LPP, a person has the potential to become part of a community (Lave & Wenger, 1991).

Looking back to when I attended my first Cyberlearning meeting in 2016, I was genuinely excited to participate as a doctoral student and a Cyberlearning buddy. I was going to meet some of the leading researchers in the field of learning sciences, and I had a seat at the table to interact with them. (Needless to say, I was in awe of this opportunity.) I got to hear great keynote speakers, such as Nicole Pinkerton discussing Using Cyberlearning to Create Equitable Learning Opportunities at City-Scale and Jim Sheldon presenting Accelerating Innovation in Learning and Teaching: Creating and Leveraging the Policy Context. (Note: the links go to videos of the talks.)

Dr. Pinkerton’s and Mr. Sheldon’s presentations moved me and made me realize how much of an impact the Cyberlearning community can have in changing lives. Even though I did not fully grasp everything that was said and done at the meeting in 2016, I knew that merely participating was giving me the opportunity to learn from the experienced researchers and PIs in the room. I was excited to contribute to the conversations that involved my knowledge areas of learning sciences and disability studies, as well as issues of access and accessibility. I realized how much I needed to learn about this community so that I could be a knowledgeable contributor in the field; I was on the precipice of a great journey. Unfortunately, after the conference, I had too many things to do to further my membership in this community. I finished my doctorate in Learning Technologies and resumed my work as a college faculty member, believing I had lost my opportunity to further contribute to the Cyberlearning community.

I was thrilled by the invitation back to the Cyberlearning community in 2019. Hearing the first keynote speaker, I experienced an epiphany and realized that I had grown over the past three years. When one of the keynote speakers, Dr. Safiya Noble, presented The Problems and Perils of Harnessing Big Data for Equity & Justice, I felt a connection to it; I understood every word she said and instantly thought of ways to apply what I had learned from her in my work at my college and to help my students. Furthermore, when Angela Booker, the final keynote speaker, gave her presentation on Ethical Power Relations as an Act of Design, she referenced Lave and Wenger at a high level; I realized that I not only understood it but that I regularly observe the phenomenon she was discussing in my instructional practice. For example, Dr. Booker mentioned persistent marginalization. Traditionally, persistent marginalization describes a group or community on the fringe of societal or cultural norms, such as people who practice Wicca as their chosen religion. When I heard Dr. Booker speak of persistent marginalization, it resonated with me because of the personal implications of the statement; I define it as a community member who could be on the fringe of the community and not be entirely accepted. As an academic with a learning disability, I often felt persistently marginalized, as if I did not belong in the community. I had felt this way about my role in the Cyberlearning community and academia as well.

In hindsight, this was a ridiculous notion, because I have been a part of both the Cyberlearning and academic communities since grad school! In reality, I had been a contributor to the fields by implementing learning science theories and passing that knowledge along to my students and colleagues. Over time, I have become a resource for others at my institution; people look to me for help with accessibility, learning sciences, and instructional design. With this realization, my perspective on participating in the 2019 conference changed profoundly. The conference showed me how much I have grown over the past three years with regard to the learning sciences and how I contribute to this community. The conversations I had with colleagues at the conference cemented my membership in the Cyberlearning community, because I know that I am a valued member and a daily contributor to Cyberlearning when I share with my students and colleagues what has been so generously passed on to me by my colleagues in the Cyberlearning community.

On a personal note, I want to thank Dr. Judi Fusco for bringing me into the Cyberlearning community.