The Trapper Keeper Has Left the Building


I am releasing a collection of scripts to support Open Content creation, and other forms of online research. The code is available under an open license. I wrote up some initial documentation, and will likely add to it over time.

The name of the project is Trapper Keeper, because of course.

The Details

Open Content is a concept to which I keep returning. Over the years, some of the work I have enjoyed the most has centered around open content and learner-controlled portfolios, and how both of these approaches to learning create the potential for improving how we assess and understand learning.

The core features that support open content creation (as I see it, anyway) have a lot in common with research: find information on the web, collect that information, analyze that information, and create new work that incorporates that information and any relevant analysis or context. And over the years, I’ve had the chance to work with some incredibly awesome people on a range of projects that supported open content creation. I’ve also had the chance to incorporate some of this functionality into personal projects.

With the pandemic continuing largely unabated, I also had a need to put time into something that felt good. To be clear, I also wanted to save myself time, and the Trapper Keeper project meets both of these needs: in this project, I’m centralizing a bunch of tools I’ve been using piecemeal, and it hopefully will be useful to other people.

All the normal caveats apply

I am not a software developer. I should not write code. But, I wrote code, and I’m releasing it under an open license.

This code works, but that is not the same as it being good. Use it at your own risk, and I welcome any improvements/pull requests.

This code should be considered pre-Alpha. I have tested it against some narrow uses, but there will be cases where it fails miserably. See the above note about where pull requests and improvements are welcome.

Reducing Risk Is Not the Same as Remote Learning


It’s human nature to try to reduce things to a binary – a yes/no, either/or – but reality tends to be more both/and with a bit of everything in between. It’s messy, it’s inefficient – about the only thing that rejecting binary thinking has going for it is that it’s accurate.

The conversation – or really, the non-conversation – in education circles about the response to Omicron offers a painful demonstration of how little people want to move beyond binary thinking. We have a nearly infinite supply of fallacies related to binary thinking, but for this post I will focus on one:

Mitigating risk is not the same as “closing schools.”

Because Covid has been a political nightmare as well as a health nightmare from the very outset, reactionary viewpoints have taken up a lot of space in the conversation. With Omicron, we face multiple realities, some more painful than others:

a. Omicron was going to hit hard no matter what we did. While the impact can be blunted, even the best case outcomes aren’t great;
b. Because of a, it’s easy for people to pick apart any mitigation strategy as ineffective. Even if the strategy did help people, the worst case scenario isn’t available as a point of comparison because it didn’t happen. In the case of a surge as devastating as Omicron, the only visible outcome is the one we have in front of us, which isn’t good. Basically, a better outcome in a bad situation still feels like a bad outcome, which leads to a lot of:
c. People making the argument that mitigation and risk reduction don’t work.

And when we are two years into a pandemic that is still ripping through us, any argument that lets us think we can do nothing and still be fine is very attractive. We’re all exhausted, and we want to be done, but we’re not.

When people started proposing sensible risk reduction strategies to help blunt the impact of Omicron, policymakers and pundits doubled down on the standard arguments they have been using for the last several months and longer – straw man arguments such as “kids don’t get Covid,” “kids are safer in school,” “kids have already lost so much,” and so on. The relative merits, or utter lack of merits, of all of these arguments can be left for another writer or another post, because for where we are now — January 6, 2022 — these arguments and their ilk are completely irrelevant.

These arguments are predicated on data from and responses to earlier variants, and mitigation strategies that take place over months or for a full school year.

The situation we want to address is this specific spike in cases, with acute mitigation over the next couple weeks.

No one is talking about switching to remote learning as a universal strategy for an undefined time. If the 2020-2021 school year taught us anything, it’s that the decision makers who spent tens of millions of taxpayer dollars on EdTech to support “anywhere anytime on demand learning” were fleeced by EdTech vendors (again, a topic that needs more attention in a different post).

What we are talking about are creative, focused solutions that help reduce risk for as many people as possible while minimizing the impact on people’s lives. Many teachers have kids under 5, and when they are required to work in unsafe conditions it places them and their family members at risk. The pressures on parents to stay employed are real, and these pressures are not evenly distributed – some parents can easily support a kid during the school day while others don’t have that level of privilege.

A balanced solution could include:

  • asking parents who can support their kids at home to keep them home for a couple days to reduce crowding in classrooms;
  • targeted use of teacher professional development days and/or snow days to give teachers time to prepare to support students who need to quarantine;
  • providing N95 masks for staff and students;
  • providing portable HEPA filters to improve air quality in indoor spaces;
  • breaking lunch and snack times into smaller groups, and eating only in spaces with good air flow and portable HEPA filters;
  • using outdoor space for class and meals when possible;
  • if case numbers are high enough that it’s necessary to temporarily pause in-person learning for every kid, prioritizing access to in-person learning for students who need it the most (elementary age, students with learning differences, children of essential workers, students for whom remote learning isn’t possible or realistic, etc.);
  • implementing pooled testing to get an accurate sense of where clusters of positive cases could be happening.

None of these things need to be done forever. None of these things are closing schools.

The goal of any intervention is a targeted, balanced response to blunt the impact of the current spike in cases: reduce risk; minimize disruption. We know that layered protections are essential. We need to move past binary thinking and use the full range of tools at our disposal.

Fighting Misinformation: Debunk and Disengage


Disproving every lie that is embedded in misinformation is time consuming and exhausting, and prevents us from talking about the things that matter to us. Rather than disprove every element of misinformation, note the general inaccuracy of the misinformation, and move on.

The Details

When it comes to addressing misinformation, I point people to the Four Moves, also known as the SIFT method. SIFT is great because it’s concrete, fast, and efficient.

However, lies often come bundled and interwoven, like yarn balls mixed together by angry cats. While it’s possible to detangle the individual lies, it’s also time consuming and exhausting.

And this gets to one key point that often gets overlooked while fighting misinformation and delivering trainings to people about misinformation (and to be clear, this is not a shortcoming of SIFT, at all): correcting misinformation is an ongoing — and likely a neverending — process.

Refuting every detail means elevating every detail, which gives the details more airplay.

Fact-checking — even using SIFT — takes more time than lying.

If you fact check every lie, you probably won’t have time for much of anything else. This is especially true when the current lies build on past lies, and doing a thorough job of debunking the current lie requires a debunking of the context that supports the lie.

It’s laborious, it’s tiresome, and — from a practical perspective — it can derail you from having the conversations you want and need to have. This time pressure is real in both personal and professional contexts. Understanding misinformation in this way can help us see that misinformation is more than just lies and conspiracies. Misinformation is a distributed denial of service attack on rational thought.

This unfortunate reality plays out in the ongoing mess that is most commonly called “anti-Critical Race Theory” disruptions targeting school boards. The reality of this mess is that the conspiracies pushed by people are a mash of anti-vax meets anti-mask meets anti-LGBTQIA rights meets parental rights meets anti-“CRT” meets their perverted version of student privacy — and that many of these conspiracies are rooted in longstanding conspiracy theories about government takeovers (because remember — to many conspiracists, “public schools” are “government schools”), globalists (and the frequent companion lie of “UN Camps”), and other lies that have been festering for years, or decades. QAnon tapped into these theories, often with help from elected officials.

Tweet from Greg Abbott in 2015 legitimizing conspiracies about a preparedness exercise.

So, when confronting pernicious misinformation rooted and tangled in conspiracy narratives, how do we debunk the lies, maintain our sanity, and protect our energy, knowing that we will likely need to address more misinformation in the very near future?

Debunk and Disengage

Time and energy are valuable resources. Be intentional when you spend them. When we dedicate time to debunking misinformation, we are not dedicating time to other things. In particular, when misinformation impacts a topic we care about, addressing all the lies is tempting. Avoid the temptation.

When we look at fighting misinformation as something that we need to do on a regular basis over time, we can shift our focus in ways that preserve our time and energy. Eliminating misinformation will never happen — conspiracies existed before the internet, and the connectivity and amplification provided by the internet ensures that misinformation will be with us for the foreseeable future. YouTube alone hosts a reservoir of conspiracy theories going back years.

Given that misinformation is probably going to be here for a while, one goal of mitigating misinformation can be to blunt its impact, rather than disprove it outright. This means we don’t look at misinformation through a True/False binary — this lens helps people spreading misinformation more than it does those of us who prefer reality, because the act of debunking misinformation can elevate the lie and prevent us from talking about other things (i.e., misinformation as DDoS). In some cases, we can be effective with a quick debunking, followed by disengagement. This can minimize the impact on our time and energy, and the effectiveness of the lies embedded in the misinformation.

When addressing misinformation, debunk and disengage. Save your time. Save your energy.

On Dogwhistles, or Putting a Pin in This

Free speech is not consequence-free speech.

If you say hateful things and people respond to the hateful things you are saying, you are not being criticized for “speaking your mind.” You are being criticized for saying hateful things.


Racism, homophobia, transphobia, anti-Semitism, Islamophobia, xenophobia, and other forms of bigotry are hateful.

If you say hateful things and people ostracize you socially, and/or you have professional consequences, it’s not censorship. People have a right to not like what you say.

If you feel that “wokeness” or “cancel culture” are actual problems on the same level as racism, you should probably learn more about racism.

It’s not unkind or unfair to point out that a person has said hateful things publicly. Hateful things cause strong reactions.

If you are called out for saying hateful things and your response is that you were “asking questions” or “starting a conversation”, please understand that asking those questions can lead participants in the conversation you started to call you out for giving voice to hateful things.

If you are called out for saying hateful things and your response is that you were “only joking”, please understand that your desire to “make jokes” can lead people to call you out for giving voice to hateful things.

If, when you are “speaking your mind”, you are saying things that members of your audience find hateful, it might be time to examine the contents of your mind.

If people call you out for saying hateful things and your response is that you were “following your conscience,” you should probably examine the contents of your conscience.

When people attempt to tone police how others respond to hateful speech, they are showing themselves to be bad faith actors. Respond accordingly, preferably by not wasting time with them. Counter lies directly; counter hate directly, and then move on.

Racists will attempt to portray themselves as the victim – when they float cliches like “cancel culture,” the “woke mob,” or the “real racist” this is what they are doing. Don’t waste time with these bad faith arguments. It gives them air, which is how they spread.

Making Text from the Facebook Papers More Accessible


I’ve been working on extracting text from the released pdfs of the Facebook Papers. The cleaned pdfs, the extracted text and the code used to clean the text are all available on Github.

Original pdf on the left; processed pdf on the right

The script requires Python 3.6 or higher, and has only been tested on Linux. Enjoy!

The Details

Like many of us, I’ve been following the reporting on internal Facebook documents: how these documents confirm and reinforce details that have been clear about Facebook for years, and how they illustrate exactly how much Facebook knew about the problems it created, and how little it did to solve them.

Also like many of us, I’ve been dying to see the original docs, so when the team at Gizmodo started releasing the docs I was pretty darn excited.

Seriously, the team at Gizmodo (Shoshana Wodinsky, Dell Cameron, Andrew Couts) have been doing stellar work reporting on these docs, and getting the core docs released publicly.

Due to the provenance of these documents, the “pdfs” released were actually worse than your normal PDF – and that’s saying something, because on the best of days PDFs are where information goes to die. These pdfs appear to be a collection of images taken of a computer screen stitched together into pdfs.

But the information in these pdfs is incredibly valuable, and we are lucky to have it.

Fortunately, from an old side project, I had some dirty, ugly, functional code lying around that cleaned up PDFs. I grabbed some of the early docs released by Gizmodo, did a test run, and lo and behold, it worked. It was ugly, but it worked.
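As a rough illustration of the kind of cleanup involved (this is a sketch of the general approach, not the actual code from the repository — the function names and the pure-Python image representation are mine), here is one step such a script might perform: finding and cropping away the dark screen border around a photographed page.

```python
# Sketch of a border-cropping step for page images photographed off a
# screen. The image is represented as a plain list of rows of grayscale
# values (0 = black, 255 = white); in a real script an imaging library
# would supply the pixel data, but the bounding-box logic is the same.

def find_content_box(pixels, threshold=200):
    """Return (top, bottom, left, right) of the smallest box containing
    all "light" pixels -- i.e. the page itself, not the dark border."""
    rows = [i for i, row in enumerate(pixels) if any(p >= threshold for p in row)]
    cols = [j for j in range(len(pixels[0]))
            if any(row[j] >= threshold for row in pixels)]
    if not rows or not cols:
        return None  # nothing bright enough to be a page
    return rows[0], rows[-1], cols[0], cols[-1]

def crop(pixels, box):
    """Keep only the rows and columns inside the bounding box."""
    top, bottom, left, right = box
    return [row[left:right + 1] for row in pixels[top:bottom + 1]]

# A tiny 4x4 "image": dark border around a 2x2 white page.
img = [
    [10, 10, 10, 10],
    [10, 255, 255, 10],
    [10, 255, 255, 10],
    [10, 10, 10, 10],
]
box = find_content_box(img)   # (1, 2, 1, 2)
page = crop(img, box)         # the 2x2 block of white pixels
```

The actual repository code handles real page images and OCR on top of this kind of geometry, but the crop-to-content idea is the core of turning a photo of a screen back into something resembling a document.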

Last night, I reworked my original (dirty, ugly) script into something cleaner, that generates better output. I generally don’t write code, except when I need to, so about the only thing I will ever say about code I write is that it solves a clearly defined problem for me at a point in time — which is a far cry from actually writing code that is good. In this improved version, I had some invaluable help from Smart People Who Know Things (I have asked permission to credit them here; I’ll update this post if/when I receive their consent).

The resulting code is now up on Github, along with the text files and the cleaned pdfs. I’m keeping my fingers crossed that I don’t bump into any repository size restrictions on Github anytime soon.

And: if there are any improvements you’d like to make or questions you have, let me know.

Moving Off AT&T

I’ve been meaning to move to a different mobile provider for years, but the story about how AT&T supports – and continues to support – a propaganda network that actively spreads disinformation finally broke through my inertia.

For others who want to move off AT&T and port your number, I want to share one hiccup in the process that I experienced. This documentation assumes that:

  • you are out of contract with AT&T;
  • you have unlocked your phone;
  • you have a SIM card for your new carrier;
  • you are porting your existing number to your new carrier.

The transfer documentation for many services states that you should swap in your new SIM card before starting the transfer. With AT&T, if you have SIM protection enabled (which you should, and which may be enabled by default), you will need to respond to a text message that asks you to confirm the number transfer.

And, if you have swapped out your AT&T SIM card to your new SIM, you’ll never get the message.

So, if you’re moving from AT&T to another carrier, your sequence should look something like this:

  • verify that you are out of contract, and/or are okay with any financial penalties from switching mid-contract;
  • verify that you have an unlocked phone;
  • select a new provider;
  • get a SIM from the new provider, and leave this out of the phone;
  • initiate the number transfer;
  • respond to the text from AT&T confirming the transfer;
  • remove the AT&T SIM card and replace it with the SIM from your new carrier.

Then, do a happy dance because you are no longer supporting a phone carrier that supports propaganda and disinformation!

(and yeah, I know, AT&T owns, well, everything. But their mobile service is terrible and expensive, and every journey is made up of small steps.)

Image Credit:
“Phone, Telefon, Fernsprechapparat” by Dr. Mattias Ripp, released under a CC 2.0 Generic license.


Mortgage Data, and Working with Large Datasets

Since The Markup reporters Lauren Kirchner and Emmanuel Martinez released their story on bias in mortgage algorithms, I’ve been digging into the data behind their reporting and looking at potential additional patterns. The story is worth a read, and a re-read. They also do a great job showing their work, which includes releasing the code and data they used for their analysis.

Their reporting is based on the 2019 data, but the Consumer Finance Protection Bureau also has 2020 data, so I figured I’d grab that as well.

This is a sizeable dataset, and even though I have a decent workhorse of a machine, loading the datasets made my computer VERY unhappy.

To work around this, I did two things. First, I pulled the code from the Jupyter notebooks into plain Python scripts, which helped reduce memory usage and CPU load a bit, at least in my setup. But this wasn’t enough to process the full dataset without crashing, so I made a temporary increase in the size of my swap file. I saved this as a bash script so I can run it whenever I need a temporary memory boost to prevent crashes.

I’ve worked with large datasets with tens of millions of records in the past, and I have never needed to do this. Writing to swap files can be very slow in its own right, and if there is a better way to prevent crashes when loading large data sets, I’d love to hear it. As I process data, I am deleting dataframes when I no longer need them, and using gc to free memory, but on my machine loading the datasets caused the crash. I would not recommend using this hack as a permanent solution, or on a machine that is not local.
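For anyone curious what the delete-and-collect pattern looks like in practice, here is a minimal sketch using only the standard library so it stays self-contained. The real analysis uses pandas (where `read_csv` with a `chunksize` argument gives you the same shape with dataframes); the file name, column name, and function name here are placeholders of my own.

```python
import csv
import gc

def count_by_column(path, column, chunk_size=100_000):
    """Stream a large CSV and tally values in one column without ever
    holding the full dataset in memory."""
    counts = {}
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        chunk = []
        for row in reader:
            chunk.append(row[column])
            if len(chunk) >= chunk_size:
                for value in chunk:
                    counts[value] = counts.get(value, 0) + 1
                del chunk[:]   # drop the chunk as soon as it's tallied
                gc.collect()   # nudge Python to release the memory now
        for value in chunk:    # tally whatever is left over
            counts[value] = counts.get(value, 0) + 1
    return counts
```

Load a slice, process it, explicitly release it: that is the general idea, whether the slice is a list of rows as here or a pandas dataframe.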

The commands can be typed out individually, which eliminates the need for a script. But hey – why type out three lines when you can just type out one? This script was used on a Debian-flavored Linux system; YMMV in other setups.

In the script, you need to set two variables: the location of the swap file, and the size. Make sure that your hard drive has adequate room to support your swap file.

Before you run the script, run free -h from the command line. This will show your default setup, with the amount of free memory on your system and your default swap setup. After you run the shell script, re-run free -h to see the changes.

When you restart your computer, your system reverts to the default setup.

#!/bin/bash
# Set these two variables for your system: where to put the swap file,
# and how large to make it. Make sure the drive has room for it.
SWAPDIR=/swapfile-temp
SIZE=16G

sudo fallocate -l $SIZE $SWAPDIR
sudo chmod 600 $SWAPDIR
sudo mkswap $SWAPDIR
sudo swapon $SWAPDIR

There Is No Such Thing as an “Online Proctoring System”


The act of proctoring an exam in person is pretty straightforward. I have proctored more than my share over the years.

For most exams, the rules and expectations about what is allowed during the exam are established before the exam. The proctor will review them, but they generally aren’t a surprise, and they largely center around the physical space, what additional material is allowed, and other basic, common sense details.

If an in-person proctor required new rules, new checks, and made technical demands that the student needed to meet in real time before taking the exam, that would be abnormal and intrusive.

If a proctor stared into a test taker’s eyes and tracked where the test taker looked, that would be invasive.

If the proctor demanded that the test taker show them the inside of their backpack at any point, that would be invasive.

If the proctor kept a running tally of every time the test taker looked away from the physical exam, that would be an absurd and meaningless statistic.

If the proctor demanded that the test taker shine a light on their face, sit by the window, or turn on more lights in the room so that the proctor could get a better picture of them, that would be creepy.

If the proctor let the test taker know that they would be sending a report on their behavior to their instructor, and that the instructor might accuse them of cheating, that would not be considered fair or reliable.

In person proctors generally do not do any of these things.

Yet, all of the above, and more, are common “features” of what we currently mis-name as “online proctoring systems.”

Moving forward, we need to call these systems what they are: surveillance tools used during tests.

There is no straight line between the behavior of in-person proctors and the surveillance of “online proctoring systems.” These are different systems, with different impacts, and that needs to be openly acknowledged.

FunnyMonkey gets a technical facelift

Keeping even a simple web site up to date is work, and anything we can do to reduce the time required is a good thing. On this site, I’ve been carrying old posts going back to 2005, which is just plain silly.

In the interest of simplifying things, I made a couple decisions:

  1. All the old posts are archived as flat html; and
  2. The site is now running on WordPress.

I’ve used WordPress for a range of things over the years, and it’s a solid foundation. I’d be lying if I said I loved it, but I don’t hate it, and it doesn’t fill me with revulsion. In an ideal world, I’d be running something using flat files and markdown, and I’ll probably move in that direction sooner rather than later, but until then, WordPress is a decent option.