EdTech, Impersonation, and Managing Risk

Over the last two weeks, two cybersecurity-related stories jumped out at me, and reminded me of a third incident that happened earlier this spring.

Introduction

On Friday, July 19, 2024, the CrowdStrike outage occurred, an incident that is, to date, the largest IT outage the Internet has seen. The outage was caused by a faulty update pushed to millions of computers, demonstrating one indisputable piece of advice: never push to production on a Friday (or late on a Thursday night).

The second story was an interesting piece shared by the security training and services company KnowBe4: a person from North Korea, possibly working with North Korean state intelligence, faked their way into a job at KnowBe4. Shortly after receiving their KnowBe4 computer, they began a series of obviously alarming actions that raised eyebrows among KnowBe4’s tech staff and led to the person being shut down in short order. KnowBe4 has stated multiple times that the person had no access to anything, in any form. In addition to their initial blog post, KnowBe4 put out an FAQ that shares additional details.

The KnowBe4 and CrowdStrike incidents both got me thinking about the nearly catastrophic xz utils incident from late March 2024. In that incident, a person (or, more likely, a group of people) spent years building a false persona that ultimately became a maintainer on an open source project. When the story broke, there was a lot of handwringing about open source and vulnerability to supply chain attacks, but as both the CrowdStrike incident and the KnowBe4 incident show, the license of the code or service does little to ensure security.

What These Events Show

Looking at these three incidents together, we can see:

  1. attackers continue to look for creative/effective/lucrative vectors to exploit;
  2. single points of failure can be leveraged to catastrophic effect; and
  3. impersonation is currently used by nation state actors and other criminals as a tool to launch what are effectively insider threat attacks.

None of this is new, but the ease and sophistication with which people can misrepresent themselves as part of an attack is an escalation. When we pair the elevated risk of insider threat via impersonation with the risk of key systems being leveraged for an attack, the outlines of a new level of attack begin to emerge.

While generative AI fails at tasks requiring basic competence and accuracy, generative AI is proving itself to be a fantastic bullshit and scam machine. Deepfake video is now easier to create, disinformation is easier to create, and grammatically precise English can be spewed endlessly. We should expect attacks using fake personas to flourish in the near future.

While disinformation, misinformation, romance scams, and other online scams and fraud have repeatedly taught us that the people we meet online might not be who they claim, more attackers now have ready access to more sophisticated tools. The KnowBe4 and xz utils attacks show that nation state actors and professional criminals will put in the time for the right payoff. These two attacks, happening in close proximity to one another, provide concrete examples of how threats materialize in the real world, how attacks can be chained together, and what the impacts look like.

And, it’s worth noting: the impacts of the CrowdStrike incident could have been significantly worse if the faulty update had been part of an insider threat attack, rather than just a mistake.

The Attacks Will Continue Until Morale Improves — Looking at You, Education

We should assume that the KnowBe4 impersonation and the xz incident are not isolated or unique, and that other similar attacks are underway right now, with varying degrees of success. We should also assume that the people attempting to compromise systems are professionals, have both skill and time, have done the research to identify targets that are both useful and accessible, and are working multiple angles in parallel.

We should also assume that an attack can both be successful in its own right, and function as a doorway to the next attack.

Thinking about this in the context of education here in the US, which has been hit by hundreds of successful ransomware attacks (not to mention some self-inflicted wounds via AI), it’s worth examining other large systems that have access to large numbers of students, core features on mission-critical services, or both.

This matters for many reasons, but for reasons of brevity I’ll keep this initial list at two items:

  1. Schools need to function for our society to exist in its current form.
  2. Schools are filled with kids whose parents have jobs in key industries (is your school near a military base, an FBI field office, a pharmaceutical company, an aerospace or defense contractor, a state capitol, etc.? You have kids in your school whose parents access useful information every day).

If nation state actors and criminals can get compromised devices or services into the “right” homes, their lives just got easier. Related: kids are wonderful, and they are absolutely an operational security risk.

The “security” and “safety” tools sold to schools are an obvious class of targets. This class of product is effectively surveillance tech. Like anti-virus and anti-malware software, it requires a high level of access to both machines and data in order to run. Specifically, I’m thinking of “safety and security” products like the ones sold by Gaggle, Securly, GoGuardian, Navigate360, and Lightspeed Systems. Because these products effectively create a scenario where a device is “compromised” in the name of security and safety, an effective attack theoretically wouldn’t even need any malware (though malware would also work). Compromising these systems might require nothing more than an insider willing to provide access to running systems. And yes: criminals are more than willing to bribe employees to gain access. Just ask Sydney Sweeney.

Another class of product used in schools that would be susceptible includes systems from organizations like the College Board and Naviance. Millions of students across thousands of schools are required to use these platforms every year, and this makes them a very attractive target.

It’s not clear, and it will absolutely vary from district to district and from school to school, how much scrutiny trusted and vetted systems receive once they are deployed. But if we are curious what a successful attack on even a moderately used system would look like in education, we can look at the scores of schools impacted by Illuminate’s data breach in 2022.

What To Do

Schools and districts: to get a sense of potential exposure, ask your vendors these questions. These questions are not comprehensive, and they will likely start a longer conversation, but they are a start.

  • How do they audit — and how often do they audit — third party code and dependencies in their software? This includes any and all libraries, SDKs, analytics tools, etc.
  • How do they monitor and protect against insider threat?
  • How do they test and verify updates? What is their rollback process if and when a bad update gets released?
  • How do they document and share successes and failures with these processes in a safe and transparent way?
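
As a thumbnail of what the first question above is probing, here is a minimal, hypothetical sketch of one slice of a dependency audit: checking that what is actually installed matches a pinned manifest. The file format and function names here are illustrative assumptions, not any vendor’s real tooling.

```python
# Hypothetical sketch of one slice of a dependency audit: flag drift
# between a pinned manifest ("name==version" lines) and what is actually
# installed. File format and function names are illustrative assumptions.

def parse_pins(lines):
    """Parse 'name==version' lines into a dict, skipping comments and blanks."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        pins[name.strip().lower()] = version.strip()
    return pins

def audit(pinned, installed):
    """Return (package, expected, found) tuples for every mismatch."""
    problems = []
    for name, expected in sorted(pinned.items()):
        found = installed.get(name)
        if found != expected:
            problems.append((name, expected, found))
    return problems
```

Real audits go much further (hash verification, transitive dependencies, known-CVE lookups), but even this level of “does reality match the manifest” checking surfaces drift that a vendor should be able to speak to.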

Vendors don’t need to wait to be asked these questions — they can start answering them proactively. If a vendor wanted to lead in this space, they could start by setting an example and creating a roadmap for others to follow.

The way through the potential security gaps in our edtech ecosystem requires us to rethink how we define, give, and maintain trust. Trusting systems is not the same as trusting people. When we work with systems, we need to move toward a practical, time-based, and constrained version of trust. I can trust a system now, but that trust doesn’t extend indefinitely. Software platforms are like flaky friends or unreliable relatives: it’s generally best to assume that something dodgy will happen, and to have a plan in advance to minimize the negative impact.
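
To make time-bounded trust concrete, here is an illustrative sketch (hypothetical names, not a real product API) of trust modeled as a grant that expires on its own and must be explicitly renewed, for example after a fresh audit:

```python
import time

# Illustrative sketch only: "TrustGrant" is a hypothetical name, not a
# real product API. The idea is that trust is a grant with an expiry,
# not a permanent property of a system.
class TrustGrant:
    def __init__(self, system, ttl_seconds, clock=time.time):
        self.system = system      # e.g. a vendor platform
        self.ttl = ttl_seconds    # how long trust holds before re-verification
        self._clock = clock       # injectable clock, useful for testing
        self.granted_at = clock()

    def is_trusted(self):
        """Trust holds only inside the time window; after that, re-verify."""
        return (self._clock() - self.granted_at) < self.ttl

    def renew(self):
        """Renewal is an explicit act, e.g. after a fresh audit of the vendor."""
        self.granted_at = self._clock()
```

The design point is that the default outcome of doing nothing is distrust: trust decays unless someone takes a deliberate, documented step to renew it.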

With this working definition in hand, the most trustworthy systems will be the ones that make it easy to verify their claims, and easy to disconnect and move on if and when we need to.


Image credit: disguise 2, by Tim Erenata. Shared under a CC-NC license. https://www.flickr.com/photos/tereneta/393487865