LIST_Glossary_of_terms
We all come from different disciplines: words like "campaign" mean different things to a military, an adtech, or a tech person (and if you're all three, you get to fight about definitions with yourself). There are also committees dedicated to defining what words like "disinformation" and "misinformation" mean, and the differences between them.
We ain't got time for that here. This glossary is our latest best effort at definitions for some of the words we use a lot between us, and what we (mostly) think we mean when we say them.
Mission words
- Cognitive security: the top layer of security, alongside physical security and cyber security. The art and practice of protecting against hacks that exploit cognitive weaknesses, especially hacks that happen online and/or target large numbers of people. One of the reasons the MisinfoSec crowd started talking about cognitive security (including rebranding as the CogSecCollab) in 2020 is a belief that, to deal with things like disinformation, we need to focus on the thing we're protecting. That means working on reducing disinformation, but also on boosting good information when we see it.
- Misinformation: false content, where that content could be text, images, video, voice, etc. Misinformation does not have to be deliberately generated (e.g. my mother might misremember my favourite colour).
- Disinformation: a deliberate attempt to deceive, usually online. The defining feature of disinformation is intent to deceive; the content itself might even be true, but placed in a deceptive context (e.g. fake users, fake groups, mislabelled images, doctored videos, etc.).
Claire Wardle's work on the differences between misinformation and disinformation is still some of the best.
Layer words
- Campaign: campaigns are long-term efforts to change or confuse populations.
- Incident: incidents are inauthentic activity, often "coordinated inauthentic activity", that usually lasts for a short period of time. The "coordinated" implies either an instigator of some form with motives (geopolitics, money, ideology, attention, etc.) or some form of collective deliberate behaviour, like flooding a hashtag. The narratives, artefacts, etc. can be picked up and continued by people who aren't driving the incident, and this is often part of an incident or campaign's goals.
- Narrative: narratives are the "stories" being used to change minds, confuse people, etc. Narratives are part of incidents: each incident might involve just one narrative or several, but there's usually an identifiable narrative somewhere that you can use to check whether related incidents have already been tracked or dealt with. Like incidents, narratives have lifetimes: some appear as a result of a world or local event (or an upcoming or anticipated one), and are only useful whilst that event is in people's minds.
- Artefact: artefacts are the objects that you can 'see' connected to a disinformation incident or campaign. They're the text, images, videos, user accounts, groups, hashtags, etc. that you use to build up a picture of an incident or campaign (there's a rough sketch below of how these layers fit together).
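If it helps to see the layering written down in one place, here's a minimal sketch in Python of how campaigns, incidents, narratives, and artefacts might relate: a campaign holds incidents, and each incident carries its own narratives and artefacts. This is purely illustrative; the class and field names are our own assumptions, not part of any shared schema.

```python
# Illustrative sketch only: one possible way to represent the "layer words" as a
# data model. Class and field names are assumptions, not an official schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Artefact:
    """An object you can 'see': text, image, video, account, group, hashtag, etc."""
    kind: str        # e.g. "text", "image", "account", "hashtag"
    reference: str   # URL, handle, or other pointer to the object

@dataclass
class Narrative:
    """A 'story' being pushed; like incidents, narratives have lifetimes."""
    summary: str
    first_seen: Optional[date] = None
    last_seen: Optional[date] = None

@dataclass
class Incident:
    """Short-lived (often coordinated) inauthentic activity."""
    name: str
    narratives: List[Narrative] = field(default_factory=list)  # one or several
    artefacts: List[Artefact] = field(default_factory=list)

@dataclass
class Campaign:
    """A long-term effort to change or confuse a population, made up of incidents."""
    name: str
    incidents: List[Incident] = field(default_factory=list)
```

The only thing the sketch is trying to capture is the containment: artefacts and narratives hang off incidents, and incidents hang off campaigns.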