Details

Hack Education

Description "The History of the Future of Education Technology"

A critical look at ed-tech's political and industry players. US- and higher-education-focused, but of interest to others.
Feed Data
  • Behaviorism Won, published on Thu, 08 Oct 2020 08:00:00 +0000:
    [i]I have volunteered to be a guest speaker in classes this term. Yesterday, I talked to the students in Roxana Marachi's educational psychology class at San Jose State.[/i]



    Thank you very much for inviting me to speak to your class. I will confess at the outset: I've never taken a class in educational psychology before. Never taken any psychology course, for that matter. My academic background, however, is in literature where one does spend quite a bit of time talking about Sigmund Freud. And I wrote my master's thesis in Folklore on political pranks, drawing in part on Freud's book [i]Jokes and Their Relation to the Unconscious[/i]. I don't think it's a stretch to argue that Freud is probably one of the most influential theorists of the twentieth century.



    A decade ago, I might have said [i]the[/i] most influential. But I've spent the last few years deeply immersed in the work of another psychologist, B. F. Skinner. I've read all his books and several books about him; I spent a week at the archives at Harvard, poring over his letters. Perhaps it's colored my assessment — I'm like that kid in [i]The Sixth Sense[/i] except instead of dead people, I see behaviorism everywhere. Okay sure, Skinner's cultural impact might not be as widely recognized as Freud's, but I don't think his importance can be dismissed. He was one of the best-known public scholars of his time, appearing on TV shows and in popular magazines, not just at academic conferences and in academic journals. B. F. Skinner was a household name.



    It's too easy, I think, to say that Freud and Skinner's ideas are no longer relevant — that psychology has advanced so far in the past century or so and that their theories have been proven wrong. I don't think that's necessarily true. One of the stories that often gets told is that after the linguist Noam Chomsky published two particularly brutal reviews of Skinner's books — a review of [i]Verbal Behavior[/i] in 1959 and a review of [i]Beyond Freedom and Dignity[/i] in 1971 — everyone seemed to agree behaviorism was wrong, and it was tossed aside for cognitive science. Certainly cognitive science did become more widely adopted within psychology departments starting in the 1960s and 1970s, but the reasons the field turned away from behaviorism were much more complicated than a couple of book reviews. And outside of academic circles, there were other factors too that diminished Skinner's popularity. The film [i]A Clockwork Orange[/i], for example, probably did much more to shape public opinion about behavior modification than anything else. In 1974, the Senate Judiciary Committee issued a report on the use of behavior modification, as there was growing concern about the ways in which these techniques were being used in federally-funded programs, including prisons and schools. In 1972, the public learned about the Tuskegee Experiment, a blatantly racist and decades-long study of the effects of untreated syphilis on African American men. People became quite wary of the use of human subjects in research experiments, medical and psychological alike, and the National Research Act was signed by President Nixon, establishing institutional review boards that would examine the ethical implications of research.



    But behaviorism did not go away. And I'd argue that it didn't go away precisely because of the [i]technologies[/i] of behavior that Skinner (and his students) promulgated.



    There's a passage that I like to repeat from an article by historian of education Ellen Condliffe Lagemann:




    I have often argued to students, only in part to be perverse, that one cannot understand the history of education in the United States during the twentieth century unless one realizes that Edward L. Thorndike won and John Dewey lost.




    I'm guessing you know who these two men are, but I'll explain nonetheless: Edward L. Thorndike was an educational psychology professor at Columbia University who developed his theory of learning based on his research on animal behavior – perhaps you've heard of his idea of the "learning curve," the time it took for animals to escape his puzzle box after multiple tries. And John Dewey was a philosopher whose work at the University of Chicago Lab School was deeply connected with that of other social reformers in Chicago – Jane Addams and Hull House, for example. Dewey was committed to educational inquiry as part of democratic practices of community; Thorndike's work, on the other hand, happened largely in the lab but helped to stimulate the growing science and business of surveying and measuring and testing students in the early twentieth century. You can think of the victory that Condliffe Lagemann speaks of, in part, as the triumph of multiple choice testing over project-based inquiry.



    Thorndike won, and Dewey lost. You can't understand the history of education unless you realize this. I don't think you can understand the history of education [i]technology[/i] without realizing this either. And I'd go one step further: you cannot understand the history of education technology in the United States during the twentieth century – and on into the twenty-first – unless you realize that Seymour Papert lost and B. F. Skinner won.



    I imagine you'll touch on Papert's work in this course too. But a quick introduction nonetheless: he was a mathematician and computer scientist and a student of Jean Piaget — another key figure in educational psychology. Papert was one of the founders of constructionism, which builds on Piaget's theories of constructivism — that is, learning occurs through the reconstruction of knowledge rather than a transmission of knowledge. In constructionism, learning is most effective when the learner constructs something meaningful.



    Skinner won; Papert lost. Thorndike won; Dewey lost. [i]Behaviorism[/i] won.



    It seems to really bother folks when I say this. It's not aspirational enough or something. Or it implies maybe that we've surrendered. Folks will point to things like maker-spaces to argue that progressive education is thriving. But I maintain, even in the face of all the learn-to-code brouhaha, that multiple choice tests have triumphed over democratically-oriented inquiry. Indeed, when we hear technologists champion "personalized learning," it's far more likely that what they envision draws on Skinner's ideas, not Dewey's.



    In education technology circles, Skinner is perhaps best known for his work on teaching machines, an idea he came up with in 1953, when he visited his daughter's fourth grade classroom and observed the teacher and students with dismay. The students were seated at their desks, working on arithmetic problems written on the blackboard as the teacher walked up and down the rows of desks, looking at the students' work, pointing out the mistakes that she noticed. Some students finished the work quickly, Skinner reported, and squirmed in their seats with impatience waiting for the next set of instructions. Other students squirmed with frustration as they struggled to finish the assignment at all. Eventually the lesson was over; the work was collected so the teacher could take the papers home, grade them, and return them to the class the following day.



    "I suddenly realized that something must be done," Skinner later wrote in his autobiography. This classroom practice violated two key principles of his behaviorist theory of learning. Students were not being told immediately whether they had an answer right or wrong. A graded paper returned a day later failed to offer the type of positive behavioral reinforcement that Skinner believed necessary for learning. Furthermore, the students were all forced to proceed at the same pace through the lesson, regardless of their ability or understanding. This method of classroom instruction also provided the wrong sort of reinforcement – negative reinforcement, Skinner argued, penalizing the students who could move more quickly as well as those who needed to move more slowly through the materials.



    So Skinner built a prototype of a mechanical device that he believed would solve these problems – and solve them not only for a whole classroom but ideally for the entire education system. His teaching machine, he argued, would enable a student to move through exercises that were perfectly suited to her level of knowledge and skill, assessing her understanding of each new concept, and giving immediate positive feedback and encouragement along the way. He patented several versions of the device, and along with many other competitors, sought to capitalize on what had become a popular subfield of educational psychology in the 1950s and 1960s: programmed instruction.



    The teaching machine wasn't the first time that B. F. Skinner made headlines – and he certainly made a lot of headlines for the invention, in part because the press linked his ideas about teaching children, as Skinner did himself no doubt, to his research on training pigeons. "Can People Be Taught Like Pigeons?" [i]Fortune[/i] magazine asked in 1960 in a profile on Skinner and his work. Skinner's work training a rat named Pliny had led to a story in [i]Life[/i] magazine in 1937, and in 1951, there was a flurry of stories about his work on pigeons. (The headlines amuse me to no end, as Skinner was a professor at Harvard by then, and many of them say things like "smart pigeons attend Harvard" and "Harvard Pigeons are Superior Birds Too.")



    Like Edward Thorndike, Skinner worked in his laboratory with animals (at first rats, then briefly squirrels, and then most famously pigeons) in order to develop techniques to control behavior. Using a system of reinforcements – food, mostly – Skinner was able to condition his lab animals to perform certain tasks. Pliny the Rat "works a slot machine for a living," as [i]Life[/i] described the rat's manipulation of a marble; the pigeons could play piano and ping pong and ostensibly even guide a missile towards a target.



    In graduate school, Skinner had designed an "operant conditioning chamber" for training animals that came to be known as the "Skinner Box." The chamber typically contained some sort of mechanism for the animal to operate – a plate for a pigeon to peck (click!), for example – that would result in a chute releasing a pellet of food.



    It is perhaps unfortunate, then, that when Skinner wrote an article for [i]Ladies Home Journal[/i] in 1945 describing a temperature-controlled, fully-enclosed crib he'd invented for his and his wife's second child, the magazine ran it with the title "Baby in a Box." (The title Skinner had given his piece: "Baby Care Can Be Modernized.")



    Skinner's wife had complained to him about the toll that all the chores associated with a newborn had taken on her with their first child, and as he wrote in his article, "I felt that it was time to apply a little labor-saving invention and design to the problems of the nursery." Skinner's "air crib" (as it eventually came to be called) allowed the baby to go without clothing, save the diaper, and without blankets; and except for feeding and diaper-changing and playtime, the baby was kept in the crib all the time. Skinner argued that by controlling the environment – by adjusting the temperature, by making the crib sound-proof and germ-free – the baby was happier and healthier. And the workload on the mother was lessened – "It takes about one and one-half hours each day to feed, change, and otherwise care for the baby," he wrote. "This includes everything except washing diapers and preparing formula. We are not interested in reducing the time any further. As a baby grows older, it needs a certain amount of social stimulation. And after all, when unnecessary chores have been eliminated, taking care of a baby is fun."



    As you can probably imagine, responses to Skinner's article in [i]Ladies Home Journal[/i] fell largely into two camps, and there are many, many letters in Skinner's archives at Harvard from magazine readers. There were those who thought Skinner's idea for the "baby in a box" bordered on child abuse – or at the least, child neglect. And there were those who loved this idea of mechanization – science! progress! – and wanted to buy one, reflecting post-war America's growing love of gadgetry in the home, in the workplace, and in the school.



    As history of psychology professor Alexandra Rutherford has argued, what Skinner developed were "technologies of behavior." The air crib, the teaching machine, "these inventions represented in miniature the applications of the principles that Skinner hoped would drive the design of an entire culture," she writes. He imagined this in [i]Walden Two[/i], a utopian (I guess) novel in which he envisaged a community that had been socially and environmentally engineered to reinforce survival and "good behavior." But this wasn't just fiction for Skinner; he designed technologies that, he argued, would improve human behavior – all in an attempt to re-engineer the entire social order and to make the world a better place.



    "The most important thing I can do," Skinner famously said, "is to develop the social infrastructure to give people the power to build a global community that works for all of us," adding that he intended to develop "the social infrastructure for community – for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all."



    Oh wait. That wasn't B. F. Skinner. That was Mark Zuckerberg. My bad.



    I would argue, in total seriousness, that one of the places that Skinnerism thrives today is in computing technologies, particularly in "social" technologies. This, despite the field's insistence that its development is a result, in part, of the cognitive turn that supposedly displaced behaviorism.



    B. J. Fogg and his Persuasive Technology Lab at Stanford are often touted by those in Silicon Valley as among the "innovators" in this "new" practice of building "hooks" and "nudges" into technology. These folks like to point to what's been dubbed colloquially "The Facebook Class" – a class Fogg taught in which students like Kevin Systrom and Mike Krieger, the founders of Instagram, and Nir Eyal, the author of [i]Hooked[/i], "studied and developed the techniques to make our apps and gadgets addictive," as [i]Wired[/i] put it in a recent article talking about how some tech executives now suddenly realize that this might be problematic.



    (It's worth teasing out a little – but probably not in this talk, since I've rambled on so long already – the difference, if any, between "persuasion" and "operant conditioning" and how they imagine to leave space for freedom and dignity. Rhetorically and practically.)



    I'm on the record elsewhere arguing that this framing – "technology as addictive" – has its problems. Nevertheless it is fair to say that the kinds of compulsive behavior we display with our apps and gadgets are being encouraged by design. All that pecking like well-trained pigeons.



    These are "technologies of behavior" that we can trace back to Skinner – perhaps not directly, but certainly indirectly due to Skinner's continual engagement with the popular press. His fame and his notoriety. Behavioral management – and specifically through operant conditioning – remains a staple of child rearing and pet training. It is at the core of one of the most popular ed-tech apps currently on the market, ClassDojo. Behaviorism also underscores the idea that how we behave and data about how we behave when we click can give programmers insight into how to alter their software and into what we're thinking.



    If we look more broadly – and Skinner surely did – these sorts of technologies of behavior don't simply work to train and condition individuals; many technologies of behavior are part of a broader attempt to reshape society. "For your own good," the engineers try to reassure us. "For the good of the global community," as Zuckerberg would say. "For the sake of the children."
  • Cheating, Policing, and School Surveillance, published on Tue, 06 Oct 2020 21:00:00 +0000:
    [i]I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke this morning to Jeffrey Austin's class at Skyline High School. Yes, I said "shit" multiple times to a Zoom full of high school students. I'm a terrific role model.[/i]



    Thank you very much for inviting me to speak to you today. I am very fond of writing centers, as I taught writing — college composition to be exact — for a number of years when I was a graduate student at the University of Oregon. The state of Oregon requires two terms of composition for graduation, thus Writing 121 and 122 were courses that almost all students had to take. (You could test out of the former, but not the latter.) Many undergraduates dreaded the class, some procrastinating until they were juniors or seniors in taking it — something that defeated the point, in so many ways, of helping students become better writers of college essays.



    The University of Oregon also required that the graduate students who mostly taught these classes take a year-long training in composition theory and pedagogy. It’s still sort of shocking to me how many people receive their PhDs and go on to become professors without taking any classes in how to teach their discipline, but I digress…



    I don’t recall spending much time talking about plagiarism as I learned how to teach writing — this was all over twenty years ago, so forgive me — although I am sure the topic came up. We were obligated, the professor who ran the writing department said, to notify her if we found a student had cheated. These were the days before TurnItIn and other plagiarism-detection software were widely adopted by schools. (These were the early days of learning management system software, however, and the UO was an early adopter of Blackboard — that's a different albeit related story.) If we suspected cheating, we had a few options. Google a sentence or two to see if the passage could be found online. Ask our colleagues if the argument sounded familiar. Confront the student.



    I always found it pretty upsetting when I suspected a student had plagiarized a paper. I tried to structure my assignments in such a way as to minimize it — asking students to turn in a one-sentence description of their argument and then a draft so that I could see their thinking and writing take shape. I felt as though I’d not done enough to guide a student toward success when they turned to essay mills or friends to plagiarize. I wanted to support students, not police them.



    When I talked to students about cheating, I found it was rarely a matter of not understanding how to properly cite their sources. Nor was it really a lazy attempt to get out of doing work. Rather, students were under pressure. They hated writing. They weren't confident they had anything substantive to say. They were struggling with other classes, with time management, with aspects of their lives outside of schooling — jobs, family, and so on. Students needed more support, not more surveillance, which, as I find myself repeating a lot these days, is too often confused with care.



    I never met a student who was a Snidely Whiplash of cheating, twisting his mustache and rubbing his hands together with glee as he displayed callous and wanton academic dishonesty.



    But that’s how so much of the educational software that’s sold to schools to curb cheating seems to imagine students — all students — as unrepentant criminals who must be rooted out and punished. Guilty until TurnItIn proves you innocent.



    Let me pause and say that, as I prepared these remarks, I weighed whether or not I wanted to extend the Snidely Whiplash reference and refer to anti-cheating software as the Dudley Do-Right of ed-tech — that bumbling, incompetent Mountie, who was somehow seen as the hero. But I don't really know how familiar you are with the cartoon Dudley Do-Right. And I don't want to upset any Canadians.



    I will, however, say this: anti-cheating software — whether it’s plagiarism detection or test proctoring — is “cop shit.” And cops do not belong on school grounds.



    I don't know much about your organization. I don't know how controversial those two sentences are: first, my claim that ed-tech is “cop shit,” and second, that police should be removed from schools.



    Since the death of George Floyd this summer, the calls to eliminate school police have grown louder. It's been a decades-long fight, in fact, in certain communities, but some schools and districts, such as in Oakland where I live, have successfully started to defund and dismantle their forces. It's an important step towards racial justice in schools, because [url=https://www.aclu.org/news/criminal-law-reform/police-in-schools-continue-to-target-black-brown-and-indigenous-students-with-disabilities-the-trump-administration-has-data-thats-likely-to-prove-it/]we know[/url] that school police officers disproportionately target and arrest students with disabilities and students of color. Students of color are more likely to attend schools that are under-resourced, for example, without staff trained to respond to behavioral problems. They are more likely to attend a school with a police officer. Black students are twice as likely to be referred to law enforcement by schools as white students. Black girls are over eight times more likely to be arrested at school than white girls. During the 2015-2016 school year, some 1.6 million students attended a school with a cop but no counselor. Defunding the police then means reallocating resources so that every child can have access to a nurse, a counselor — to a person who cares.



    But what does this have to do with ed-tech, you might be thinking…



    In many ways, education technology merely reinscribes the beliefs and practices of the analog world. It's less a force of disruption than it is a force for the consolidation of power. So if we know that school discipline is racist and ableist, then it shouldn't surprise us that the software that gets built to digitize disciplinary practices is racist and ableist too.



    Back in February, Jeffrey Moro, a PhD candidate in English at the University of Maryland, wrote a very astute blog post, "[url=https://jeffreymoro.com/blog/2020-02-13-against-cop-shit/]Against Cop Shit[/url]," about "cop shit" in the classroom.



    "For the purposes of this post," Moro wrote, "I define 'cop shit' as 'any pedagogical technique or technology that presumes an adversarial relationship between students and teachers.' Here are some examples:



    • ed-tech that tracks our students' every move
    • plagiarism detection software
    • militant tardy or absence policies, particularly ones that involve embarrassing our students, e.g. locking them out of the classroom after class has begun
    • assignments that require copying out honor code statements
    • 'rigor,' 'grit,' and 'discipline'
    • any interface with actual cops, such as reporting students' immigration status to ICE and calling cops on students sitting in classrooms."



    The history of some of these practices is quite long. But I think that this particular moment we're in right now has greatly raised the stakes with regard to the implications of "cop shit" in schools.



    [url=http://hackeducation.com/2017/02/02/ed-tech-and-trump]In the very first talk I gave[/url] during the Trump Administration — just 13 days after his inauguration — I warned of the potential for ed-tech to be used to target, monitor, and profile at-risk students, particularly undocumented and queer students. I didn't use the phrase "cop shit," but I could clearly see how easy it would be for a strain of Trumpism to amplify the surveillance technologies that already permeated our schools. Then, in February 2017, I wanted to sound the alarm. Now, almost four years later, it's clear we need to do more than that. We need to dismantle the surveillance ed-tech that already permeates our schools. I think this is one of our most important challenges in the months and years ahead. We must abolish "cop shit," recognizing that almost all of ed-tech is precisely that.



    Why do we have so much "cop shit" in our classrooms, Jeffrey Moro asks in his essay. "One provisional answer is that the people who sell cop shit are very good at selling cop shit," he writes, "whether that cop shit takes the form of a learning management system or a new pedagogical technique. Like any product, cop shit claims to solve a problem. We might express that problem like this: the work of managing a classroom, at all its levels, is increasingly complex and fraught, full of poorly defined standards, distractions to our students' attentions, and new opportunities for grift. Cop shit, so cop shit argues, solves these problems by bringing order to the classroom. Cop shit defines parameters. Cop shit ensures compliance. Cop shit gives students and teachers alike instant feedback in the form of legible metrics."



    Ed-tech didn't create the "cop shit" in the classroom or launch a culture of surveillance in schools by any means. But it has facilitated it. It has streamlined it.



    People who work in ed-tech and with ed-tech have to take responsibility for this, and not just shrug and say it's inevitable or it's progress or school sucked already and it's not our fault. We have to take responsibility because we are facing a number of crises — some old and some new — that are going to require us to rethink how and why and if we monitor and control teachers and students — [i]which[/i] teachers and students. Because now, the “cop shit" that schools are being sold isn't just mobile apps that track whether you've completed your homework on time or that assess whether you cheated when you did it. Now we're talking about tracking body temperature. Contacts. Movement. And as I feared back in early 2017, gender identity. Immigration status. Political affiliation.



    Surveillance in schools reflects the values that schools have (unfortunately) prioritized: control, compulsion, distrust, efficiency. Surveillance is necessary, or so we've been told, because students cheat, because students lie, because students fight, because students disobey, because students struggle. Much of the physical classroom layout, for example, is meant to heighten surveillance and diminish cheating opportunities: the teacher in a supervisory stance at the front of the class, wandering up and down the rows of desks and peering over the shoulders of students. (It's easier, I should note, to shift the chairs in your classroom around than it is to shift the code in your software.) And all of this surveillance, we know, plays out very differently for different students in different schools — which schools require students to walk through metal detectors, which schools call the police for disciplinary infractions, which schools track what students do online, even when they're at home. And nowadays, [i]especially[/i] when they're at home.



    [url=https://www.washingtonpost.com/nation/2020/09/08/black-student-suspended-police-toy-gun/]Last month[/url], officials in Edgewater, Colorado called the cops on a 12-year-old boy who held a toy gun during his Zoom class. He was suspended from school.



    Digital technology companies like to say that they're increasingly handing over decision-making to algorithms — it's not that the principal called the cops; the algorithm did. Automation is part of the promise of surveillance ed-tech — that is, the automation of the work of disciplining, monitoring, grading. That way, education gets cheaper, faster, better.



    We've seen lately, particularly with the switch to online learning, a push for the automation of cheating prevention. Proctoring software is some of the most outrageous "cop shit" in schools right now.



    These tools gather and analyze far more data than just a student's responses on an exam. They require a student to show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver’s license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home. Some ask for a tour of the student's room to make sure there aren't "suspicious items" on the walls or nearby.



    The proctoring software then uses this data to monitor a student's behavior during the exam and to identify patterns that it interprets as cheating — if their eyes stray from the screen too long, for example. The algorithm — sometimes in concert with a human proctor — determines who is a cheat. But more chilling, I think, the algorithm decides who is suspicious, what is suspicious.



    We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, for example, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed," we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.



    We know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this, as do technology companies — and this is really important. We can see how the mistrust and loathing of students is part of a proctoring company culture and gets baked into a proctoring company's software when, for example, the CEO posts copies of a student's chat logs with customer service onto Reddit, [url=https://www.theguardian.com/australia-news/2020/jul/01/ceo-of-exam-monitoring-software-proctorio-apologises-for-posting-students-chat-logs-on-reddit]as the head of Proctorio did in July[/url]. (Proctorio is also suing an instructional technologist from British Columbia for sharing links to unlisted YouTube videos on Twitter.)



    That, my friends, is some serious "cop shit." If we believe that cops have no business in schools — and the research certainly supports that — then hopefully we can see that neither does Proctorio.



    But even beyond that monstrous company, we have much to unwind within the culture and the practices of schools. We must move away from a culture of suspicion and towards one of trust. That will demand we rethink "cheating" and cheating prevention.



    Indeed, I will close by saying that — as with so much in ed-tech — the actual tech itself may be a distraction from the conversation we should have about what we actually want teaching and learning to look like. We have to change the culture of schools, not just adopt kinder ed-tech. We have to stop the policing of classrooms in all its forms and support other models for our digital and analog educational practices.
  • Selling the Future of Ed-Tech (& Shaping Our Imaginations), published on Tue, 15 Sep 2020 21:00:00 +0000:
    [i]I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke briefly tonight in Anna Smith's class on critical approaches to education technology (before a really excellent discussion with her students). I should note that I talked through my copy of [/i]The Kids' Whole Future Catalog[i] rather than, as this transcript suggests, using slides. Sorry, that means you don't get to see all the pictures...[/i]



    Thank you very much for inviting me here today. (And thank you for offering a class on critical perspectives on education and technology!)



    In the last few classes I've visited, I've talked a lot about surveillance technologies and ed-tech. I think it's one of the most important and most horrifying trends in ed-tech — one that extends beyond test-proctoring software, even though, since the pandemic and the move online, test-proctoring software has been the focus of a lot of discussions. Even though test-proctoring companies like to sell themselves as providing an exciting, new, and necessary technology, this software has a long history that's deeply intertwined with pedagogical practices and beliefs about students' dishonesty. In these class talks, I've wanted to sound the alarm about what I consider to be an invasive and extractive and harmful technology but I've also wanted to discuss the beliefs and practices — and the [i]history[/i] of those beliefs and practices — that might prompt someone to compel their students to use this technology in the first place. If nothing else, I've wanted to encourage students to ask better questions about the promises that technology companies make. Not just "can the tech fulfill these promises?", but "why would we want them to?"



    In my work, I write a lot about the "ed-tech imaginary" — that is, the ways in which our beliefs in ed-tech's promises and capabilities tend to be governed as much by fantasy as by science or pedagogy. Test-proctoring software, for example, does not reliably identify cheating — and even if it did, we'd want to ask a bunch of questions about why we'd want it or need it.



    Arguably one could say something similar about education as a whole: our beliefs about it are often unmoored from reality. Although we all have experiences in the classroom — as students, if nothing else — many of us hold onto some very powerful, fictional narratives about what schools look like, what they looked like in the past, what they will look like in the future.







    When I was a kid, my brother had this book called [i]The Kids' Whole Future Catalog[/i]. The name was purposefully reminiscent of [i]The Whole Earth Catalog[/i], which was an important counterculture publication from the late 1960s (through the late 1980s) — one that was very influential on the development of early cyberculture. Like [i]The Whole Earth Catalog[/i], [i]The Kids' Whole Future Catalog[/i] was a mail-order catalog of sorts. Each page had addresses where you could write away for products and information. And like [i]The Whole Earth Catalog[/i], [i]The Kids' Whole Future Catalog[/i] claimed to be inspired by the work of Buckminster Fuller, the well-known futurist and architect. (The inventor of the geodesic dome and author of a series of lectures on the automation of education, among other things.)



    The book was published in 1982. I was 11; my brother was 8. 1982 was two years into the Reagan Presidency and a year before the broadcast of [i]The Day After[/i] convinced me that there would be no future and that we'd all likely die in a nuclear holocaust. To be honest, I don't actually recall reading the parts of the book that have to do with the future of school. I remember reading the parts that said that in the future insects would be a major source of protein and that in the future we'd either live in space colonies or underground. The future sounded pretty terrible.



    The book offered seven areas that might signal what "future learning" would look like, some of which were grounded in contemporary developments of the Eighties and some of which sound quite familiar today:



    [b][i]"New Ways to Learn"[/b] — "Put down those pencils… close your books… and come visit a new kind of school. It doesn't look like a school at all. There aren't any desks, books, pencils, or paper. Instead, there are games to play, experiments to try, things to climb on and down and through, drawers that say 'open me,' and displays marked 'please touch'. There aren't any teachers to tell you what you're supposed to do — just 'explainers' who walk around answering questions. What would you like to do first?"[/i]



    I'd ask you to think about these predictions — not just how realistic were they or are they, but who do you think makes these predictions, who do these futures benefit, and who do they ignore?







    [b][i]"Schools on the Move"[/b] — "Classes will never be boring on an airship traveling around the world!"[/i]


    Again, whose future would this be? And is that rubble the airship is sailing over?







    [i]"A Robot for a Teacher"[/i] — [i]This section showcases Leachim, a robot created by Michael Freeman to help his wife, a fourth-grade teacher. Leachim's "computer brain is packed with information from seven textbooks, a dictionary, and a whole set of encyclopedias. He knows all the names of all the children in the class, their brothers and sisters, and their parents and pets. He can talk with them about their hobbies and interests. And he can work with up to five students at once, speaking to each of them separately through headsets and showing them pictures on the Tableau screen to his right."[/i]



    Freeman actually commercialized a version of this teacher robot — the 2-XL, an early "smart toy" that played an 8-track tape with multiple-choice and true-false questions.



    Let's note the inconsistencies in these predictions. On one hand, there are no more schools. Everything is hands-on learning. On the other hand, robots will still administer multiple-choice tests to students. Some people simply cannot imagine a world without multiple-choice tests.



    [b][i]"Computers Will Teach You… and You Will Teach Computers!"[/i][/b]



    Again, what happens to human teachers in the future?



    [i]"Simulations" — "Beginning pilots can use flight simulators to experience takeoff, flight, and landing without ever leaving the ground. A computer-controlled wide-screen film system gives them the impression that they're flying in all kinds of weather and encountering problems that actual pilots face."[/i]



    Incidentally, the first flight simulator was invented in 1929. So it's not that the prediction about simulations isn't a good one; it's that calling this "the future" is just so historically inaccurate.







    [b][i]"Exploring Inner Space"[/i][/b]



    This is arguably the weirdest one. The future of learning will be imaging exercises, according to [i]The Kids' Whole Future Catalog[/i], arguably akin to the "mindset" and "social emotional learning" craze of today. Students of the future will learn and work through their dreams, create dream journals. Students will work with all their senses, including psychic experiences. This section includes Masuaki Kiyota, a Japanese psychic known for "thoughtography" — that is, taking pictures of what's in the mind — and Uri Geller, the Israeli psychic who could bend spoons with his mind. Both have since been found to be frauds (although in fairness to Geller, I suppose, he's now described as a magician and not a psychic). In the future, students will practice clairvoyance, precognition, and telepathy — ironically, these are the sorts of things that surveillance ed-tech like proctoring software promises too.







    [b][i]"Lifelong learning"[/b] — "When will you finish your education? When you're 16? 18? 22? 30? People who are asked this question in the future will probably answer 'never.' They'll need a lifetime of learning in order to keep up with their fast-changing world. Instead of working at the same job until retirement, people will change jobs frequently. In between jobs, they'll go to school — but not to the kinds of schools we know today. Grade schools, high schools and colleges will be replaced by community learning centers where people of all ages will meet to exchange ideas and information."[/i]



    Retirement, according to this section, will happen at age 102. No indication if death occurs at the same time.



    A bit of a side-note here: I started down a rabbit hole yesterday, searching for the origins of the phrase "lifelong learning," because once again it's become a phrase you hear all the time from pundits and politicians. It struck me as I was re-reading [i]The Kids' Whole Future Catalog[/i] to prepare for class tonight that this push for "lifelong learning" in the 1980s might be connected to Reaganomics, if you will — to the need for job training in a recession, to restrictions to Social Security, to anti-unionization efforts, and to corporations' lack of commitment to their employees. What then might we make of the ed-tech imaginary when it comes to "lifelong learning," then or now: where did these ideas come from? Who's promoting them now, and who promoted them then? Interestingly, one of the popularizers of "lifelong learning" was UNESCO, which promoted "lifelong education" in its 1972 Faure Report. A competing vision of the future of work and education was offered by the OECD. There's a much longer story here that I'll spare you (for now; I'm going to write an essay on this). But it's worth thinking about how powerful organizations push certain narratives about the future — narratives that support their policy and political agendas. (These are the same folks who like to tell you that "[url=https://longviewoneducation.org/field-guide-jobs-dont-exist-yet/]65% of children entering school today will eventually hold jobs that don't exist yet[/url]" — a claim that is, quite simply, made-up bullshit.)



    [i]The Kids' Whole Future Catalog[/i] feels a lot like made-up bullshit too. But I don't think that means it's unworthy of critical analysis. Indeed, invented narratives are often rich sources to study.



    You can chuckle, I suppose, that a 1982 children's book — "a book about your future" — might be intertwined with narratives and ideology. I admit I wonder who else might have had this guide on their bookshelf. Marc Andreessen, the co-creator of the first widely used Web browser and now a venture capitalist, was born the same year as me. So was Elon Musk. Larry Page, the co-founder of Google, was born in 1973. So was his co-founder Sergey Brin. Jack Dorsey, the CEO of Twitter, was born in 1976. His co-founders, Evan Williams and Biz Stone, were born in 1972 and 1974 respectively. Sal Khan, the founder of Khan Academy, was born in 1976. That is to say, there's quite a number of technology entrepreneurs — some with more or less connection to ed-tech — who were raised on a future of learning involving robot teachers and pseudoscience. And that's just tech entrepreneurs. That's not including politicians or filmmakers or professors born in the early 1970s.




    Other generations had other touch-points when it comes to the "ed-tech imaginary."







    [i]The Jetsons[/i] aired an episode with a robot teacher in 1963.







    A comic featuring "Push-Button Education" appeared in newspapers in 1958.



    Thomas Edison predicted in 1913 that textbooks would soon be obsolete — replaced, surprise surprise, by films like the ones he himself was marketing.







    To commemorate the World's Fair in Paris in 1900, a series of postcards was printed that imagined what the year 2000 would look like. One of these featured the school of the future — boys, wearing headsets, seated in rows of desks. The headsets were connected by wires to a machine into which the teacher fed textbooks. The knowledge from these books was ostensibly ground up as another student turned the crank on the machine and then transmitted through the wires into the students' brains.



    And you can laugh. But this notion that we can send information through wires into people's brains is probably the most powerful and lasting idea about the future of learning:







    Think of that scene in [i]The Matrix[/i] where Neo is plugged into a machine so he can be fed educational programming straight into his brainstem: "Whoa. I know Kung Fu."



    How many folks think this is what ed-tech is striving for, what the future of learning will be like?
  • Robot Teachers, Racist Algorithms, and Disaster Pedagogy, published on Thu, 03 Sep 2020 21:00:00 +0000:
    [i]I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim's class "Race Before Race: Premodern Critical Race Studies." Here's a bit of what I said...[/i]



    Thank you for inviting me to speak to your class this evening. I am sorry that we're all kicking off fall term online again — well, online is better than dead, of course. I know that this probably isn't how you imagined college would look. But then again, it's worth pausing and thinking about what we imagine school [i]should[/i] look like, what we [i]expect[/i] school to be like, and what school [i]is[/i] like — not just emergency pandemic Zoom-school, but the institution with all its histories and politics and practices and (importantly) variations. Not everywhere looks like Brandeis. Not everyone experiences Brandeis the same way.



    Me, I write about education technology for a living. I'm not an advocate for ed-tech; I'm not here to sell you on a new flashcard app or show you how to use the learning management system more effectively. I'm an ed-tech critic. That doesn't mean I just write about how ed-tech sucks — although it mostly does suck — it means that I spend my time thinking through the ways in which technology shapes education and education shapes technology and the two are shaped by ideologies, particularly capitalism and white supremacy. And I do so because I want us — all of us — to flourish; and too often both education and technology are deeply complicit in exploitation and destruction instead.



    These are not good times for educational institutions. Many are struggling, as I'm sure you know, to re-open. Many that have re-opened face-to-face are closing and moving online. Most, I'd wager, are facing severe funding shortages — the loss of tuition, dorm-room, and sportsball dollars, for example — as well as new expenses like PPE and COVID testing. Cities, counties, and states are all seeing massive budget shortfalls — and this is, of course, how most public schools (and not only at the K-12 level) are actually funded, not by tuition or federal dollars but by state and local allocations. (That's not to say that the federal government couldn't and shouldn't step up to bail out the public education system.)



    Schools have been doing "more with less" for a long while now. Many states had barely returned to pre-recession funding levels before the pandemic hit. And now in-person classes are supposed to be smaller. Schools need more nurses and teaching aides. And there just isn't the money for it. So what happens?



    Technology offers itself as the solution. Wait. Let me fix that sentence. Technology companies offer their products as the solution, and technology advocates promote the narrative of techno-solutionism.



    If schools are struggling right now, education technology companies — and technology companies in general — are not. Tech companies are dominating the stock market. The four richest men in the world are all tech executives (Jeff Bezos, Bill Gates, Mark Zuckerberg, and Elon Musk — all of whom are education technology "philanthropists" of some sort as well, incidentally). Ed-tech companies raised over [url=https://www.edsurge.com/news/2020-07-29-us-edtech-raises-803m-in-first-half-of-2020-as-covid-19-forces-learning-online]$800 million[/url] in the first half of this year alone. The promise of ed-tech — now as always: make teaching and learning cheaper, faster, more scalable, more efficient. And where possible, practical, and politically expedient: replace expensive, human labor with the labor of the machine. Replace human decision-making with the decision-making of an algorithm.



    This is already happening, of course, with or without the pandemic. Your work and your behavior, as students, are already analyzed by algorithms, many of them designed to identify when and if you cheat. Indeed, it's probably worth considering how much the fear of cheating is constitutive of ed-tech — how much of the technology that you're compelled to use is designed because the system — be that the school, the teachers, the structures or practices — does not trust you.



    For a long time, arguably the best-known anti-cheating technology was the plagiarism detection software TurnItIn. The company was founded in 1998 by UC Berkeley doctoral students who were concerned about cheating in the science classes they taught. They were particularly concerned about the ways in which they feared students were using a new feature on the personal computer: copy-and-paste. So they turned some of their research on pattern-matching of brainwaves toward creating a piece of software that would identify patterns in texts. TurnItIn became a huge business, bought and sold several times over by private equity firms since 2008: first by Warburg Pincus, then by GIC, and then, in 2014, by Insight Partners — the price tag for that sale: $754 million. TurnItIn was acquired by the media conglomerate Advance Publications last year for $1.75 billion.
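

    If it helps to make "identify patterns in texts" concrete, here is a minimal sketch of the generic technique that text-overlap detectors tend to rely on: chop each document into overlapping word n-grams ("shingles") and compare the sets. This is a textbook illustration in Python, not TurnItIn's proprietary algorithm, and the example sentences are invented.

    [code]
# A generic illustration of text "pattern matching" for overlap detection:
# word 5-grams ("shingles") plus Jaccard similarity. A textbook technique,
# not TurnItIn's proprietary algorithm.

def shingles(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(doc_a: str, doc_b: str) -> float:
    a, b = shingles(doc_a), shingles(doc_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)  # Jaccard similarity of the shingle sets

submitted = "Behaviorism did not go away because the technologies of behavior persisted in schools"
source = "Behaviorism did not go away because the technologies of behavior persisted in classrooms everywhere"
print(f"{overlap(submitted, source):.0%} shingle overlap")  # a high score flags the passage for review
    [/code]

    The point of the sketch is how little intelligence is required: a high overlap score simply flags a submission for review, and everything contentious lies in deciding how much overlap is "too much."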



    That price-tag should prompt us to ask: what's so valuable about TurnItIn? Is it the size of the customer base — the number of schools and universities that pay to use the product? Is it the algorithms — the pattern-matching capabilities that purport to identify plagiarism? Is it the vast corpus of data that the company has amassed — decades of essays and theses and Wikipedia entries that it uses to assess student work?



    TurnItIn has been challenged many times by students who've complained that it violates their rights to ownership of their work. A judge ruled, however, in 2008 that students' copyright was not infringed upon as they'd agreed to the Terms of Service. But that seems a terribly flawed decision, because what choice does one have but to click "I agree" when one is compelled to use a piece of software by one's professor, one's school? What choice does one have when the whole process of assessment is intertwined with this belief that students are cheaters and thus with a technology infrastructure that is designed to monitor and curb their dishonesty?



    Every student is guilty until the algorithm proves her innocence.



    Incidentally, one of its newer products promises to help students [i]avoid[/i] plagiarism, and essay mills now also use TurnItIn so that they can promise to help students avoid getting caught cheating. The company works both ends of the plagiarism market.



    Anti-cheating software isn't just about plagiarism, of course. No longer does it just analyze students' essays to make sure the text is "original." There is a growing digital proctoring industry that offers schools ways to monitor students during online test-taking. Well-known names in the industry include ProctorU, Proctorio, and Examity. Many of these companies were launched circa 2013 — that is, in the tailwinds of "the Year of the MOOC" — with the belief that an increasing number of students would be learning online and that professors would demand some sort of mechanism to verify their identity and their integrity. According to one investment company, the market for online proctoring was expected to reach $19 billion last year — much smaller than the size of the anti-plagiarism market, for what it's worth, but one that investors see as poised to grow rapidly, particularly in the light of schools' move online because of COVID.



    These proctoring tools gather and analyze far more data than just a student's words, than their responses on an exam. They typically require a student to show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver’s license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home.



    The proctoring software then uses this data to monitor a student's behavior during the exam and to identify patterns that it interprets as cheating — if their eyes stray from the screen too long, for example, or if there are sticky notes on the wall, their "suspicion" score goes up. The algorithm — sometimes in concert with a human proctor — decides who is suspicious. The algorithm decides who is a cheat.
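

    To make that "suspicion score" concrete, here is a deliberately crude sketch of what rule-based scoring of such signals could look like. The signals and weights below are invented for illustration; no vendor publishes its actual scoring logic, which is exactly the "black box" problem.

    [code]
# A deliberately crude, invented sketch of a rule-based "suspicion score."
# The signals and weights are illustrative only, not any vendor's system.

from dataclasses import dataclass

@dataclass
class ExamSession:
    seconds_eyes_off_screen: int
    face_match_confidence: float   # 0.0-1.0 from the face-recognition step
    background_noise_events: int
    other_person_detected: bool

def suspicion_score(session: ExamSession) -> float:
    score = 0.0
    score += 0.5 * (session.seconds_eyes_off_screen / 60)  # penalize looking away
    score += 2.0 * (1.0 - session.face_match_confidence)   # penalize a "poor" face match
    score += 0.3 * session.background_noise_events         # penalize household noise
    if session.other_person_detected:
        score += 3.0
    return score

# A student whose face the camera matches poorly -- a documented failure mode
# for darker-skinned faces -- starts the exam already "more suspicious."
session = ExamSession(seconds_eyes_off_screen=90, face_match_confidence=0.62,
                      background_noise_events=4, other_person_detected=False)
print(round(suspicion_score(session), 2))
    [/code]

    Every number in a sketch like that is a value judgment, which is the point.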



    We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed," we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.



    We do know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this.



    Of course, not all surveillance in schools is about preventing cheating; it's not all about academic dishonesty — but it is always, I'd argue, about monitoring [i]behavior[/i] and [i]character[/i] (and I imagine in this class you are talking about the ways in which institutional and interpersonal assessments of behavior and character are influenced by white supremacy). And surveillance is always caught up in the inequalities students already experience in our educational institutions.



    For the past month or so, there's been a huge controversy in the UK over a different kind of algorithmic decision-making. As in the US, schools in the UK were shuttered in the spring because of the coronavirus. Students were therefore unable to sit their A Levels, the exams that are the culmination of secondary education there. These exams are a Very Big Deal — even more so than the SAT exams that many of you probably took in high school. While the SAT exams might have had some influence on where you were accepted — I guess Brandeis is test-optional these days, so nevermind — A Levels almost entirely dictate where students are admitted to university. British universities offer conditional acceptances that are dependent on the students' actual exam scores. So, say, you are admitted to the University of Bristol, as long as you get two As and a B on your A Levels.



    No A Levels this spring meant that schools had to come up with a different way to grade students, and so teachers assigned grades based on how well the student had done so far and how well they thought the student would do, and then Ofqual (short for Office of Qualifications and Examinations Regulation), the English agency responsible for these national assessments, adjusted these grades with an algorithm — an algorithm designed in part to avoid grade inflation (which, if you think about it, is just another form of this fear of cheating but one that implicates teachers instead of students).



    [url=https://www.theguardian.com/education/2020/aug/13/almost-40-of-english-students-have-a-level-results-downgraded]Almost 40%[/url] of teacher-assigned A-Level grades were downgraded by at least one grade. Instead of getting those two As and a B that you expected to get and that would get you into Bristol, the algorithm gave you an A, a B, and a C. No college admission for you.



    In part, Ofqual's algorithm used the history of schools' scores to determine students' scores. Let me pause there so you can think about the very obvious implications. [url=https://www.theguardian.com/education/2020/aug/13/who-won-and-who-lost-when-a-levels-meet-the-algorithm]It's pretty obvious[/url]: the model was more likely to adjust the scores of students attending private schools upward, because students at private schools, historically, have performed much better on their A Levels. (As someone who attended a private school in England, I can guarantee you that it's not that they're smarter.) Ofqual's algorithm adjusted the scores of students attending the most disadvantaged state schools downward, because students at state schools, historically, have not performed very well. (I don't want to get too much into race and class and the British education system, but suffice it to say, about 7% of the country attends private schools, and graduates from those schools hold about 40% of top jobs, including government jobs.) Overall, the scores of students in the more affluent regions of London, the Midlands, and Southeast England were adjusted so that they rose more than the scores of students in the North, which has, for a very long time (maybe always?), been a more economically depressed part of the country.



    At first, the British government — which does rival ours for its sheer idiocy and incompetence — refused to admit there was a problem or to change the grades, even arguing there was no systemic bias in the revised exam scores because, [url=https://www.theguardian.com/education/2020/aug/13/england-a-level-downgrades-hit-pupils-from-disadvantaged-areas-hardest]according to one official[/url], teachers grade poor students too leniently — something that the algorithm was designed to address. But students took to the streets, chanting "Fuck the algorithm," and the government quickly backed down, fearing that it might alienate not only the youth but also their families. Grades were reverted to those given by teachers, not the algorithm, and university spots were given back to those who'd had their offers rescinded.



    I should note here that there was nothing particularly complex about the A-Level algorithm. This wasn't artificial intelligence or complex machine learning that decided students' grades. It was really just a simple formula, probably calculated in an Excel spreadsheet. (That doesn't make this situation any better, of course.)
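
    To see how even a simple formula produces the outcomes described above, consider a deliberately reduced sketch. This is not Ofqual's actual model (which was more involved); it just captures the core move: rank a school's students by their teacher-assessed grades, then hand out grades according to the school's historical distribution.

[code]
# A deliberately reduced illustration of the core move, NOT Ofqual's actual
# model: students are ranked by teacher assessment within their school, then
# re-graded according to what that school has historically produced. A strong
# student at a historically weak school gets pulled down regardless of their work.

GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def adjust_grades(teacher_grades, historical_distribution):
    """teacher_grades: {student: teacher-assessed grade}.
    historical_distribution: the grades this school 'usually' produces,
    listed best to worst, one per student in the cohort."""
    ranked = sorted(teacher_grades, key=lambda s: GRADE_ORDER.index(teacher_grades[s]))
    return {student: historical_distribution[i] for i, student in enumerate(ranked)}

teacher_grades = {"Amira": "A", "Ben": "A", "Chloe": "B"}
# this school has historically produced one A, one B, and one C per three entrants
print(adjust_grades(teacher_grades, ["A", "B", "C"]))
# {'Amira': 'A', 'Ben': 'B', 'Chloe': 'C'}: Ben loses his A to his school's past
[/code]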



    The A-Level algorithm is part of what Ruha Benjamin calls the "new Jim Code," the racist designs of our digital architecture. And I think what we can see in this example is the ways in which pre-digital policies and practices get "hard-coded" into new technologies. That is, how long-running biases in education — biases about race, ethnicity, national origin, class, gender, religion, and so on — are transferred into educational technologies.



    Lest you think that the fiasco in the UK will give education technologists and education reformers pause before moving forward with algorithmic decision-making and algorithmic surveillance, [url=https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2020/08/INV-006163]the Gates Foundation last month awarded[/url] the Educational Testing Service (which runs the GRE exam) a $460,000 grant "to validate the efficacy of Automated Essay Scoring software in improving student outcomes in argumentative writing for students who are Black, Latino, and/or experiencing poverty."



    A couple of days ago, I saw [url=https://twitter.com/DanaJSimmons/status/1300639757165191170]a series of tweets from a parent[/url], complaining that her junior high school-age son had gone from loving history class to hating it — "tears," "stress," "self-doubt," after the first auto-graded assignment he turned in gave him a score of 50/100. The parent, a professor at USC, showed him how to game the software: write long answers, use lots of proper nouns. His next score was 80/100. An algorithm update one day later: "He cracked it: Two full sentences, followed by a word salad of all possibly applicable keywords. 100% on every assignment. Students on @Edgenuityinc, there's your ticket. He went from an F to an A+ without learning a thing." (Sidenote: in 2016, Alabama state congressperson Mike Hubbard was found guilty of 12 counts of felony ethics violations, including receiving money from Edgenuity. Folks in ed-tech are busy trying to stop students from cheating while being so shady themselves.)
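
    We don't know how Edgenuity's grader actually works under the hood, but the hack the parent describes (long answers, proper nouns, keyword salad) is exactly what would defeat a naive keyword-matching scorer. A toy version, entirely hypothetical, makes the failure mode obvious.

[code]
# A toy keyword-matching grader: a guess at the kind of scoring a "word salad"
# of keywords and proper nouns would defeat, not Edgenuity's actual code.

def keyword_score(answer, keywords, max_points=100):
    """Award points for each expected keyword that appears in the answer,
    regardless of whether the sentences around it make any sense."""
    words = set(answer.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return round(max_points * hits / len(keywords))

keywords = ["constitution", "federalism", "madison", "ratification"]
thoughtful = "The framers argued over how much power the states should keep."
word_salad = "Constitution federalism Madison ratification convention compromise"
print(keyword_score(thoughtful, keywords))   # 0, despite being a reasonable answer
print(keyword_score(word_salad, keywords))   # 100, without a single complete thought
[/code]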



    I tweeted in response to the homework algorithm "hack" that if it's not worth a teacher reading the assignment/assessment, then it's not worth the student writing it. That robot grading is degrading. I believe that firmly. (Again, think of that Gates grant. Who has a teacher or peer read their paper, and who gets a robot?) Almost a thousand people responded to my tweet, most agreeing with the sentiment. But a few people said that robot grading was fine, particularly for math and that soon enough it would work in the humanities too. "Manual grading is drudgery that consumes time and energy we could spend elsewhere," [url=https://twitter.com/jweisber/status/1301210706751172608]one professor responded[/url]. And again, I disagreed, because I think it's important to remember, if nothing else, that if it's drudgery for teachers it's probably drudgery for students too. People did not like that tweet so much, and many seemed to insist that drudgery was a necessary part of learning.



    And so, there you go. We've taken that drudgery of analog worksheets and we've made that drudgery digital and we call that "progress." Ed-tech promises it can surveil all the clicks and taps that students make while filling out their digital worksheets, calculating how long they spend on their homework, where they were when it was completed, how many times they tabbed out to play a game instead, how their score compares to other students, whether they're "on track" or "falling behind," claiming it can predict whether they'll be a good college student or a good employee. Ed-tech wants to gamify admissions, hiring, and probably if we let it, the school-to-prison pipeline.



    I won't say "it's up to you," students, to dismantle this. That's unfair. Rather it is up to all of us, I think — faculty, students, citizens, alike — to chant "Fuck the algorithm" a lot more loudly.
  • Pigeon Pedagogy by on Wed, 29 Jul 2020 21:00:00 +0000:
    [i]These were my remarks today during my "flipped" keynote at DigPed. You can read the transcript of my keynote [url=http://hackeducation.com/2020/07/29/luddite-sensibilities]here[/url].[/i]



    We haven't had a dog in well over a decade. Kin and I travel so much that it just seemed cruel. But now, what with the work-from-home orders and no travel til there's a vaccine (and even perhaps, beyond that), we decided to get one.



    It's actually quite challenging to adopt a dog right now, as everyone seems to be of the same mind as us. And even before the pandemic, there was a bit of a dog shortage in the US. Spay-and-neuter programs have been quite effective, and many states have passed laws outlawing puppy mills. The West Coast generally imports dogs from other parts of the country, but these rescue-relocations have largely been shut down. The shelters are pretty empty.



    It's a great time to be a dog.



    Adopting a dog is quite competitive, and we have been on multiple waiting lists. But finally, we lucked out, and last week we adopted Poppy. She is a 9-month-old Rottie mix. She weighs about 55 pounds. She is not housebroken yet — but we're getting there. She's very sweet and super smart and is already getting better on the leash, at sitting when in the apartment elevator, at sitting at street corners, at sitting when people and other dogs approach her. It's important, I think, if you have a big dog, that you train them well.



    If you have a dog, you probably know that the best way to train it is through positive behavior reinforcement. That is, rather than punishing the dog when she misbehaves, the dog should be rewarded when she exhibits the desired behavior. This is the basis of operant conditioning, as formulated by the infamous psychologist B. F. Skinner.



    The irony, of course. I've just finished a book on the history of teaching machines — a book that argues that Skinner's work is fundamental to that history, to how ed-tech is still built today. Ed-tech is operant conditioning, and we should do everything we can to resist it. And now I'm going to wield it to shape my dog's behavior.



    Some background for those who don't know: As part of his graduate work, Skinner invented what's now known as "the Skinner Box." This "operant conditioning chamber" was used to study and to train animals to perform certain tasks. For Skinner, most famously, these animals were pigeons. Do the task correctly; get a reward (namely food).



    Skinner was hardly the first to use animals in psychological experiments that sought to understand how the learning process works. Several decades earlier, for his dissertation research, the psychologist Edward Thorndike had built a "puzzle box" in which an animal had to push a lever in order to open a door and escape (again, often rewarded with food for successfully completing the "puzzle"). Thorndike measured how quickly animals figured out how to get out of the box after being placed in it again and again and again -- their "learning curve."



    We have in the puzzle box and in the Skinner Box the origins of education technology — some of the very earliest "teaching machines" — just as we have in the work of Thorndike and Skinner, the foundations of educational psychology and, as Ellen Condliffe Lagemann has pronounced in her famous statement "Thorndike won and Dewey lost," of many of the educational practices we carry through to this day. (In addition to developing the puzzle box, Thorndike also developed prototypes for the multiple choice test.)



    "Once we have arranged the particular type of consequence called a reinforcement," Skinner wrote in 1954 in "The Science of Learning and the Art of Teaching," "our techniques permit us to shape the behavior of an organism almost at will. It has become a routine exercise to demonstrate this in classes in elementary psychology by conditioning such an organism as a pigeon.”



    "[i]...Such an organism as a pigeon[/i]." We often speak of "lab rats" as shorthand for the animals used in scientific experiments. We use the phrase too to describe people who work in labs, who are completely absorbed in performing their tasks again and again and again.



    In education and in education technology, students are also the subjects of experimentation and conditioning. Indeed, that is the point. In Skinner's framework, they are not "lab rats"; they are [i]pigeons[/i]. As he wrote,




    ...Comparable results have been obtained with pigeons, rats, dogs, monkeys, human children… and psychotic subjects. In spite of great phylogenetic differences, all these organisms show amazingly similar properties of the learning process. It should be emphasized that this has been achieved by analyzing the effects of reinforcement and by designing techniques that manipulate reinforcement with considerable precision. Only in this way can the behavior of the individual be brought under such precise control.




    Learning, according to Skinner and Thorndike, is about behavior, about reinforcing those behaviors that educators deem "correct" — and in this framework knowledge and answers are behaviors too, not just sitting still and raising one's hand before speaking (a behavior I see is hard-coded into this interface). When educators fail to shape, reinforce, and control a student's behavior through these techniques and technologies, they are at risk, in Skinner's words, of "losing our pigeon."



    In 1951, he wrote an article for [i]Scientific American[/i]: "How to Train Animals." I pulled it out again to prepare for this talk today and realized that it contains almost all the tips and steps that dog trainers now advocate for. Get a clicker and use it as the conditioned reinforcer: give treats along with the click so the dog comes to associate the click with the reward. (The click is faster than the treat.) You can teach a dog to do anything in less than twenty minutes, Skinner insisted. And once you're confident with that, you can train a pigeon. And then you can train a baby. And then…



    Two years after that article, Skinner came up with the idea for his teaching machine. Visiting his daughter's fourth grade classroom, he was struck by the inefficiencies. Not only were all the students expected to move through their lessons at the same pace, but when it came to assignments and quizzes, they did not receive feedback until the teacher had graded the materials -- sometimes a delay of days. Skinner believed that both of these flaws in school could be addressed through mechanization, and he built a prototype for his teaching machine which he demonstrated at a conference the following year.



    Skinner believed that materials should be broken down into small chunks and organized in a logical fashion for students to move through. The machine would show one chunk, one frame, at a time, and if the student answered the question correctly, they could move on to the next one. Skinner called this process "programmed instruction." We call it "personalized learning" today. And yes, this involves a lot of clicking.
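
    Here is a minimal sketch of that programmed-instruction loop: present one small frame, give immediate feedback, and advance only on a correct response. Not Skinner's mechanical device, of course, just the logic it embodied, the same logic a lot of "personalized learning" software still follows.

[code]
# A minimal sketch of the programmed-instruction loop: one small frame at a
# time, immediate feedback, advancement only on a correct response.

FRAMES = [  # each chunk of material paired with its expected answer
    ("Reinforcing every 5th response is a fixed _____ schedule.", "ratio"),
    ("Reinforcing the first response after a set time is a fixed _____ schedule.", "interval"),
]

def run_program(frames):
    for prompt, expected in frames:
        while True:  # repeat the frame until the learner produces the "correct" behavior
            response = input(prompt + " ").strip().lower()
            if response == expected:
                print("Correct. Next frame.")  # immediate reinforcement
                break
            print("Not quite. Try again.")     # no advancement without the right answer

if __name__ == "__main__":
    run_program(FRAMES)
[/code]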



    Skinner is often credited with inventing the teaching machine. He didn't. Sidney Pressey, another educational psychologist, had built one decades beforehand. (Skinner said that Pressey's was more a testing machine than a teaching machine.) Regardless of who was or wasn't "the first," Skinner has shaped education technology immensely. Even though his theories have largely fallen out of favor in most educational psychology circles, education technology (and technology more broadly) seems to have embraced them — often, I think, without acknowledging where these ideas came from. Our computer technologies are shot through with behaviorism. Badges. Notifications. Haptic alerts. Real-time feedback. Gamification. Click click click.



    According to Skinner, when we fail to properly correct behavior — facilitated by and through machines — we are at risk of "losing our pigeons." But I'd contend that with this unexamined behaviorist bent of (ed-)tech, we actually find ourselves at risk of losing our humanity. To use operant conditioning, Skinner wrote in his article on animal training, "we must build up some degree of deprivation or at least permit a deprivation to prevail which it is within our power to reduce." That is, behavioral training relies on deprivation. Behaviorist ed-tech relies on suffering — suffering that we could eliminate were we not interested in exploiting it to reinforce compliance. This pigeon pedagogy stands in opposition to the Luddite pedagogy I wrote about in [url=http://hackeducation.com/2020/07/29/luddite-sensibilities]the text for this keynote[/url].



    So, here's to our all being "lost pigeons," and unlearning our training. But dammit, here's to Poppy learning to be a very good and obedient dog.