Fighting Misinformation: Debunk and Disengage

tl;dr

Disproving every lie embedded in a piece of misinformation is time-consuming and exhausting, and it prevents us from talking about the things that matter to us. Rather than disprove every element of the misinformation, note its general inaccuracy and move on.

The Details

When it comes to addressing misinformation, I point people to the Four Moves, also known as the SIFT method. SIFT is great because it’s concrete, fast, and efficient.

However, the lies often come bundled and interwoven, like yarn balls mixed together by angry cats. While it’s possible to untangle the individual lies, doing so is also time-consuming and exhausting.

And this gets to a key point that often gets overlooked when fighting misinformation and delivering trainings about misinformation (and to be clear, this is not a shortcoming of SIFT at all): correcting misinformation is an ongoing, and likely never-ending, process.

Refuting every detail means elevating every detail, which gives the details more airplay.

Fact-checking — even using SIFT — takes more time than lying.

If you fact-check every lie, you probably won’t have time for anything else. This is especially true when current lies build on past lies, and doing a thorough job of debunking the current lie requires debunking the context that supports it.

It’s laborious, it’s tiresome, and, from a practical perspective, it can derail you from having the conversations you want and need to have. This time pressure applies in both personal and professional contexts. Understanding misinformation in this way can help us see that misinformation is more than just lies and conspiracies. Misinformation is a distributed denial of service attack on rational thought.

This unfortunate reality plays out in the ongoing mess most commonly called the “anti-Critical Race Theory” disruptions targeting school boards. The conspiracies pushed in this mess are a mash of anti-vax meets anti-mask meets anti-LGBTQIA rights meets parental rights meets anti-“CRT” meets a perverted version of student privacy. Many of these conspiracies are rooted in longstanding conspiracy theories about government takeovers (because remember: to many conspiracists, “public schools” are “government schools”), globalists (and the frequent companion lie of “UN Camps”), and other lies that have been festering for years or decades. QAnon tapped into these theories, often with help from elected officials.

Tweet from Greg Abbott in 2015 legitimizing conspiracies about a preparedness exercise.

So, when confronting pernicious misinformation rooted and tangled in conspiracy narratives, how do we debunk the lies, maintain our sanity, and protect our energy, knowing that we will likely need to address more misinformation in the very near future?

Debunk and Disengage

Time and energy are valuable resources. Be intentional when you spend them. When we dedicate time to debunking misinformation, we are not dedicating time to other things. In particular, when misinformation impacts a topic we care about, addressing all the lies is tempting. Avoid the temptation.

When we look at fighting misinformation as something we need to do on a regular basis over time, we can shift our focus in ways that preserve our time and energy. Misinformation will never be eliminated: conspiracies existed before the internet, and the connectivity and amplification the internet provides ensure that misinformation will be with us for the foreseeable future. YouTube alone hosts a reservoir of conspiracy theories going back years.

Given that misinformation is probably going to be here for a while, one goal of mitigating it can be to blunt its impact rather than disprove it outright. This means we don’t look at misinformation through a True/False binary; that lens helps the people spreading misinformation more than it helps those of us who prefer reality, because the act of debunking can elevate the lie and prevent us from talking about other things (i.e., misinformation as DDoS). In some cases, we can be effective with a quick debunking followed by disengagement. This minimizes the cost to our time and energy, and it minimizes the effectiveness of the lies embedded in the misinformation.

When addressing misinformation, debunk and disengage. Save your time. Save your energy.

On Dogwhistles, or Putting a Pin in This

Free speech is not consequence-free speech.

If you say hateful things and people respond to the hateful things you are saying, you are not being criticized for “speaking your mind.” You are being criticized for saying hateful things.

Image from https://www.acmewhistles.co.uk/acme-silent-dog-whistle-535

Racism, homophobia, transphobia, anti-Semitism, Islamophobia, xenophobia, and other forms of bigotry are hateful.

If you say hateful things and people ostracize you socially, or you face professional consequences, it’s not censorship. People have a right to not like what you say.

If you feel that “wokeness” or “cancel culture” are actual problems on the same level as racism, you should probably learn more about racism.

It’s not unkind or unfair to point out that a person has said hateful things publicly. Hateful things cause strong reactions.

If you are called out for saying hateful things and your response is that you were “asking questions” or “starting a conversation”, please understand that asking those questions can lead participants in the conversation you started to call you out for giving voice to hateful things.

If you are called out for saying hateful things and your response is that you were “only joking”, please understand that your desire to “make jokes” can lead people to call you out for giving voice to hateful things.

If, when you are “speaking your mind”, you are saying things that members of your audience find hateful, it might be time to examine the contents of your mind.

If people call you out for saying hateful things and your response is that you were “following your conscience,” you should probably examine the contents of your conscience.

When people attempt to tone police how others respond, they are showing themselves to be bad faith actors. Respond accordingly, preferably by not wasting time on them. Counter lies directly, counter hate directly, and then move on.

Racists will attempt to portray themselves as the victim; when they float clichés like “cancel culture,” the “woke mob,” or the “real racist,” this is what they are doing. Don’t waste time on these bad faith arguments. Engaging with them gives them air, which is how they spread.

Making Text from the Facebook Papers More Accessible

tl;dr

I’ve been working on extracting text from the released PDFs of the Facebook Papers. The cleaned PDFs, the extracted text, and the code used to clean the text are all available on GitHub.

Original pdf on the left; processed pdf on the right

The script requires Python 3.6 or higher, and has only been tested on Linux. Enjoy!

The Details

Like many of us, I’ve been following the reporting on internal Facebook documents: how these documents confirm and reinforce details that have been clear about Facebook for years, and how they illustrate exactly how much Facebook knew and how little it did to solve the problems it created.

Also like many of us, I’ve been dying to see the original docs, so when the team at Gizmodo started releasing the docs I was pretty darn excited.

Pretty.

Darn.

Excited.

Seriously, the team at Gizmodo (Shoshana Wodinsky, Dell Cameron, Andrew Couts) have been doing stellar work reporting on these docs, and getting the core docs released publicly.

Due to the provenance of these documents, the “PDFs” released were actually worse than your normal PDF, and that’s saying something, because on the best of days PDFs are where information goes to die. These PDFs appear to be a collection of images taken of a computer screen and stitched together into PDF files.

But the information in these pdfs is incredibly valuable, and we are lucky to have it.

Fortunately, from an old side project, I had some dirty, ugly, functional code lying around that cleaned up PDFs. I grabbed some of the early docs released by Gizmodo, did a test run, and lo and behold, it worked. It was ugly, but it worked.
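For readers curious about the general approach, here is a minimal sketch of OCR-ing an image-based PDF in Python. This is an illustration of the technique, not the script in the repository, and it assumes the pdf2image and pytesseract packages (along with the underlying poppler and tesseract tools) are installed.

# Minimal sketch: pull text out of an image-based PDF by rendering each page
# to an image and running OCR on it. Not the actual repo script; pdf2image and
# pytesseract (with poppler and tesseract installed) are assumed to be available.
import sys
from pathlib import Path

import pytesseract
from pdf2image import convert_from_path


def extract_text(pdf_path: Path, out_dir: Path) -> Path:
    """Render each page of the PDF at 300 dpi and OCR it into one text file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    pages = convert_from_path(str(pdf_path), dpi=300)
    text_file = out_dir / f"{pdf_path.stem}.txt"
    with text_file.open("w", encoding="utf-8") as fh:
        for number, page in enumerate(pages, start=1):
            fh.write(f"--- page {number} ---\n")
            fh.write(pytesseract.image_to_string(page))
            fh.write("\n")
    return text_file


if __name__ == "__main__":
    for pdf in sys.argv[1:]:
        print(extract_text(Path(pdf), Path("extracted_text")))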

Last night, I reworked my original (dirty, ugly) script into something cleaner that generates better output. I generally don’t write code except when I need to, so about the only thing I will ever say about code I write is that it solves a clearly defined problem for me at a point in time, which is a far cry from writing code that is actually good. In this improved version, I had some invaluable help from Smart People Who Know Things (I have asked permission to credit them here; I’ll update this post if/when I receive their consent).

The resulting code is now up on GitHub, along with the text files and the cleaned PDFs. I’m keeping my fingers crossed that I don’t bump into any repository size restrictions on GitHub anytime soon.

And: if there are any improvements you’d like to make or questions you have, let me know.

Moving Off AT&T

I’ve been meaning to move to a different mobile provider for years, but the story about how AT&T supports – and continues to support – a propaganda network that actively spreads disinformation finally broke through my inertia.

For others who want to move off AT&T and port their number, I want to share one hiccup in the process that I experienced. This documentation assumes that:

  • you are out of contract with AT&T;
  • you have unlocked your phone;
  • you have a SIM card for your new carrier;
  • you are porting your existing number to your new carrier.

The transfer documentation for many services states that you should swap in your new SIM card before starting the transfer. With AT&T, if you have SIM protection enabled (which you should, and which may already be enabled by default), you will need to respond to a text message that asks you to confirm the number transfer.

And, if you have swapped out your AT&T SIM card to your new SIM, you’ll never get the message.

So, if you’re moving from AT&T to another carrier, your sequence should look something like this:

  • verify that you are out of contract, and/or are okay with any financial penalties from switching mid-contract;
  • verify that you have an unlocked phone;
  • select a new provider;
  • get a SIM from the new provider, and leave this out of the phone;
  • initiate the number transfer;
  • respond to the text from AT&T confirming the transfer;
  • remove the AT&T SIM card and replace it with the SIM from your new carrier.

Then, do a happy dance because you are no longer supporting a phone carrier that supports propaganda and disinformation!

(and yeah, I know, AT&T owns, well, everything. But their mobile service is terrible and expensive, and every journey is made up of small steps.)

Image Credit:
“Phone, Telefon, Fernsprechapparat” by Dr. Mattias Ripp, released under a CC 2.0 Generic license.


Mortgage Data, and Working with Large Datasets

Since The Markup reporters Lauren Kirchner and Emmanuel Martinez released their story on bias in mortgage algorithms, I’ve been digging into the data behind their reporting and looking at potential additional patterns. The story is worth a read, and a re-read. They also do a great job showing their work, which includes releasing the code and data they used for their analysis.

Their reporting is based on the 2019 data, but the Consumer Financial Protection Bureau also has 2020 data, so I figured I’d grab that as well.

This is a sizeable dataset, and even though I have a decent workhorse of a machine, loading the datasets made my computer VERY unhappy.

To work around this, I did two things. First, I pulled the code from the Jupyter notebooks into plain Python, which helped reduce memory usage and CPU load a bit, at least in my setup. But this wasn’t enough to process the full dataset without crashing, so I made a temporary increase to my swap space. I saved this as a bash script so I can run it whenever I need a temporary memory boost to prevent crashes.

I’ve worked with large datasets with tens of millions of records in the past, and I have never needed to do this. Writing to swap files can be very slow in its own right, and if there is a better way to prevent crashes when loading large datasets, I’d love to hear it. As I process data, I delete dataframes when I no longer need them and use gc to free memory, but on my machine, simply loading the datasets caused the crash. I would not recommend using this hack as a permanent solution, or on a machine that is not local.
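As a concrete illustration of that cleanup pattern, here is a generic sketch (not the actual analysis code; the file name and column name are placeholders):

# Minimal sketch of the delete-and-collect pattern with pandas.
# "hmda_2020.csv" and "action_taken" are placeholder names, not the real schema.
import gc

import pandas as pd

# Loading the dataset is the expensive step.
df = pd.read_csv("hmda_2020.csv")

# Do whatever work is needed while the dataframe is in memory.
print(df["action_taken"].value_counts())

# Drop the reference and ask the garbage collector to release the memory
# before loading the next dataset.
del df
gc.collect()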

The commands can be typed out individually, which eliminates the need for a script. But hey, why type out three lines when you can just type out one? This script was used on a Debian-flavored Linux system; YMMV in other setups.

In the script, you need to set two variables: the location of the swap file, and the size. Make sure that your hard drive has adequate room to support your swap file.

Before you run the script, run sudo free -h from the command line. This will show your default setup, with the amount of free memory on your system, and your default swap setup. After you run the shell script, re-run sudo free -h to see the changes.

When you restart your computer, your system reverts to the default setup.

#!/bin/bash
# Temporarily add swap space; the system reverts to its default setup on reboot.

SWAPDIR=/swapfile   # location of the swap file
SIZE=8G             # size of the swap file; adjust to the free space on your drive

sudo fallocate -l $SIZE $SWAPDIR   # reserve space for the swap file
sudo chmod 600 $SWAPDIR            # restrict access to root
sudo swapon $SWAPDIR               # enable the new swap space

There Is No Such Thing as an “Online Proctoring System”

Image credit: https://www.pinterest.com/pin/532058143475158623/

The act of proctoring an exam in person is pretty straightforward. I have proctored more than my share over the years.

For most exams, the rules and expectations about what is allowed during the exam are established before the exam. The proctor will review them, but they generally aren’t a surprise, and they largely center on the physical space, what additional material is allowed, and other basic, common-sense details.

If an in-person proctor required new rules, new checks, and made technical demands that the student needed to meet in real time before taking the exam, that would be abnormal and intrusive.

If a proctor stared into a test taker’s eyes and tracked where the test taker looked, that would be invasive.

If the proctor demanded that the test taker show them the inside of their backpack at any point, that would be invasive.

If the proctor kept a running tally of every time the test taker looked away from the physical exam, that would be an absurd and meaningless statistic.

If the proctor demanded that the test taker shine a light on their face, sit by the window, or turn on more lights in the room so that the proctor could get a better picture of them, that would be creepy.

If the proctor let the test taker know that they would be sending a report on their behavior to their instructor, and that the instructor might accuse them of cheating, that would not be considered fair or reliable.

In-person proctors generally do not do any of these things.

Yet all of the above, and more, are common “features” of what we currently misname “online proctoring systems.”

Moving forward, we need to call these systems what they are: surveillance tools used during tests.

There is no straight line between the behavior of in-person proctors and the surveillance of “online proctoring systems.” These are different systems, with different impacts, and that needs to be openly acknowledged.

FunnyMonkey gets a technical facelift

Keeping even a simple web site up to date is work, and anything we can do to reduce the time required is a good thing. On this site, I’ve been carrying old posts going back to 2005, which is just plain silly.

In the interest of simplifying things, I made a couple decisions:

  1. All the old posts are archived as flat HTML; and
  2. FunnyMonkey.com is now running on WordPress.

I’ve used WordPress for a range of things over the years, and it’s a solid foundation. I’d be lying if I said I loved it, but I don’t hate it, and it doesn’t fill me with revulsion. In an ideal world, I’d be running something using flat files and Markdown, and I’ll probably move in that direction sooner rather than later, but until then, WordPress is a decent option.