Watch the VOX-Pol Webinar on ‘Terrorism Informatics’

In March 2021, VOX-Pol published four Blog posts on the topic of ‘Terrorism Informatics’. The series covers A Framework for Researchers, Identifying Extremist Networks, Analysing Extremist Content, and Predicting Extremist Behaviour. The Blogs are written by Matti Pohjonen, a Researcher for a Finnish Academy-funded project on Digital Media Platforms and Social Accountability (MAPS) and a VOX-Pol Fellow.

Following the four Blog posts, VOX-Pol hosted a Webinar, where blog author Matti Pohjonen and Eugenia Siapera, UCD Professor of Information and Communication Studies, discussed the topic of ‘Terrorism Informatics’ and how researchers can ethically engage in these kinds of analyses.

Prof Siapera posed questions about the values that drive the research, potential impacts on civil liberties, methodological concerns, and possible future directions for this research. Professor Maura Conway, VOX-Pol Coordinator, introduced the discussion and led the Q&A.

The webinar can be watched on the VOX-Pol YouTube channel or below.

The full transcript is below.

Maura Conway (00:00:00)

I’m Maura Conway. I’m Paddy Moriarty Professor of Government and International Studies in the School of Law and Government, Dublin City University. And I’m the coordinator of VOX-Pol. 

All I really want to do this evening is introduce you to our two guests, and then I’m going to hand over to them. The Master of Ceremonies is Eugenia Siapera. Many thanks Eugenia for being with us. Eugenia is Professor of Information and Communication Studies, and Head of the School of Information and Communication Studies at University College Dublin. Her research interests are broadly in digital and social media, political communication, and also journalism. And for our purposes this evening, she has done a wide range of research, but in particular, she has worked on racist hate speech in the Irish digital sphere, and is the co-editor of the 2019 book, Gender Hate Online, together with Debbie Ging from DCU.

And in the hot seat is Matti Pohjonen. Matti received his MA and PhD from the School of Oriental and African Studies at the University of London, and works at the intersections of digital anthropology, philosophy, and data science. So he’s a little bit of a unicorn. For the past 10 years, he’s developed critical comparative research approaches for understanding digital cultures around the world. This has included, among other things, work on international news and blogging in India, mobile technology in East Africa, and comparative research on online extremism and hate speech in Ethiopia and in Europe. He’s also spent quite a bit of time exploring new methods, so-called big data analysis, and artificial intelligence for digital media research. Matti has been involved with VOX-Pol since 2015, and he’s here this afternoon to discuss his recent series of blog posts for us on ‘Terrorism Informatics’. I think what these usefully do is describe, but also problematize, a wide range of computational approaches, methods and tools that are in current and, I guess, also increasing use in online extremism and terrorism research.

So just for some very brief housekeeping, Eugenia and Matti are going to have a bit of back and forth about the blog posts, but what we invite all of you to do is to put your questions in the chat as we progress, and Eugenia will address some of those to Matti over the course of the session. So basically, that’s all I’ve got for my end. I want to thank you, Eugenia and Matti for being here, and I’m going to hand over to them.

Matti Pohjonen (00:03:00)

Thanks Maura. Maybe what I will do first is start off by providing a bit of context and background to the blog posts, as a way of framing where we are coming from in this discussion. I’m also looking forward to a very lively and great discussion… even if we tend to be staring into the abyss of Zoom oftentimes, and I do thank all the participants hidden behind the screen for coming in such big numbers to participate. Also thanks to Eugenia, probably the best interlocutor that I could have imagined for such a conversation. I’m hoping we can keep this quite organic and relatively relaxed, despite the fact that it’s happening online. And thanks to Maura for all the work that she has been doing, and for inviting me.

So very briefly – I don’t want to take too much time from the actual discussion – I will provide a bit of context on the blog posts, where the idea originated, and how I came into the discussion. As the blog posts said, and as Maura might have mentioned at the beginning, these are based on the workshop that I co-organized for VOX-Pol in 2018, called ‘Terrorism Informatics’: a cost-benefit analysis of using computational techniques in violent online political extremism research. In these workshops, we debated and discussed over a number of days how to look, from different perspectives, at what is now increasingly called computational or big data methods or, in even more general terms, artificial intelligence. At that point we had social scientists and computer scientists around the table, and a lot of the ideas in a way originated, and stemmed, and found a forum through the conversations in the workshops.

The question may be why that never ended up being published, or why it ended up as blog posts. I think one of the challenges that I faced, at a very hectic point in my life, was that there has been a massive transition, or a change, in terms of how we frame the issues. And I will try to talk about this today as the discussion proceeds.

Around the idea of ‘Terrorism Informatics’, in the last three years alone, there has been so much development, so much change going on, in all kinds of debates. We don’t really talk very narrowly any more about the debate in terms of ‘Terrorism Informatics’; we now talk more broadly about things like computational social science, artificial intelligence, computational methods. All these things have been developing and evolving very quickly, so every time I started writing the final draft, or the final version for publication, I found that I had to rewrite the whole thing. It has been an interesting progression of things evolving and changing very quickly.

And so we thought in the end: let’s start off with a number of blog posts, and we can use these as a way of bringing the discussions into the public arena, but at the same time, potentially, take the subject forward and further update them. This is the spirit or background from which the four blog posts, which some of you may have been reading on the VOX-Pol page, emerged.

I ended up writing double or more than what I was expected to write, even within this context, and I had trouble summarizing it down. So, as a kind of caveat, we also need to take into account that there are limitations in blog posts as to how much can be incorporated. There are a lot of things that were omitted and missing and, again, that is something we can probably augment and bring into the discussion here.

I think one interesting or relevant perspective, before we go into the discussion, is where I come from – as Maura mentioned, the unicorn perspective; I’m not sure about that – but I do come to debates on digital methods or computational methods as someone from a different disciplinary tradition. By no means do I consider myself a computational scientist; I’m actually an anthropologist by training, and I will talk about this throughout the comments and dialogue.

I come to these debates, increasingly, from what I call a kind of theoretical bifocal perspective. And this is a reference to Derrida talking about the idea of philosophy as something within which you have to operate, but also from outside of the tradition, because you can’t really ever step outside these traditions. So in a way, I look at this both as a practitioner, and in terms of the philosophical or more theoretical implications of these methods. I have learned, to a relative degree of confidence, how to use some of these methods, and I have been using them in my work.

I’m also currently working as a kind of creative AI fine artist – I do a lot of work on visuals on the side – and I also do a lot of work exploring the very practical implications of these things. I hope to be able to communicate, to both Eugenia and to you as well, that there is this kind of schizophrenic, bifocal double perspective that I bring in, where I try to use these methods, but I also investigate their more theoretical underpinnings while working with them, as a way of bringing both of these perspectives together. And in this way, more broadly, I think we can then start looking both at questions about how we use these methods, and at the broader social and political implications and issues that Eugenia specialises in and probably knows much more about than I do.

Just before handing over to Eugenia: obviously there was a lot of material in the blog posts, and they were trying to address a lot of complicated debates and a lot of diverse, heterogeneous research in a relatively summarised way. But I think there are two kinds of points or takeaways that we can start highlighting from them.

I think one of the highlights was that once you go into looking at some of the implications of using computational methods – and that was in the first blog post – we are not dealing only with questions of technological validity, or methodological validity. There are also issues that transcend them: we cannot determine the use of these methods solely by looking at their technological qualities or characteristics alone. The discussion of false positives and false negatives that I started off with was a way of saying that, in these systems, there is always a tension or a contradiction in how we design them, which ultimately has to involve normative and ethical constraints, or issues. That is something that I think should be highlighted, and we can discuss it a bit more.

And the second point, which was towards the end of the blog posts – another idea that I have been working with – is that a lot of the time we assume that these systems can become better incrementally, step by step, from 80% to 85%, as long as we improve the technologies and algorithms used. However, we don’t often consider: what is it that we are improving from? And so I also wanted to bring in this discussion of what we call the ground truth in a lot of these technological methods, and try to understand it through more social, political, and broader kinds of discussion – what it is we are really talking about here, and what other issues come along with it.

And also how that might bring in other debates around AI ethics and bias, and other things that have become very popular and much discussed today. In a way there’s a lot more that I could add here, but I think I will leave it to Eugenia. And then we can talk about these very interesting but also very difficult questions in a kind of dialogue form. So thanks for coming, and I’m looking forward to hearing questions from you as well.

Eugenia Siapera (00:11:00)

Thank you very much, Matti. Thank you very much, Maura, for the introduction. I’m delighted to be here, and delighted to participate in this kind of discussion. I, too, come from a social science background. In this discussion we’ll probe issues that pertain to some of the less technical questions around these methodologies, but I think they’re very valuable for us to probe. And I suppose we’re not looking for concrete answers here – though of course we would be happy if we could provide answers. The point is to open up discussions about these issues and these methods in particular: about, perhaps more specifically, the prominence that they are given in recent paradigms within the social sciences and beyond, and whether they are really worth the prominence and hype that surrounds them. And also, in this way, to put them back into their proper place within broader systems of knowledge and epistemologies.

So, to bring this discussion back to the actual blogs and the topics that you raised, Matti – you also alluded to this in your introduction – methods are never only about the methods; they’re also ways of knowing the world. And as such, they’re underpinned by values that are either overt and apparent to people, or somehow hidden and not very accessible. So I wonder if you could explain a little what the values are that underpin informatics and computational methodologies, especially when it comes to extremism research?

Matti Pohjonen (00:13:00)

So, an excellent question, Eugenia, and also a very difficult one. I will try to respond to it step by step. I think one of the major challenges is that it’s very hard to summarise such a diverse field of research in terms of a single set of values, or a single set of issues. And I will try to talk about this a bit more in the answer as I develop it.

In a way, what is interesting, just going back to the name that we have been discussing: ‘Terrorism Informatics’ was very popular about 10 years ago. Gradually this term – the workshop was still called that – has been shifting, as I mentioned in the introduction, more towards looking first at digital methods and computational methods, and increasingly at AI. So there’s an interesting shift, on the one hand, in the terminology used once we go into extremism research; but on the other hand, there has also been a kind of popularisation of the approaches and methods that we have generally been using in extremism research. And I’ll try to talk about that a bit more, because it is very much linked to where I think the idea of values, or the idea of how we understand these systems in the broader context, operates.

Somebody might say that I’m an old hand in the so-called extremism field. I think extremism research, and the methods that have been used, function as a kind of canary in the coal mine: they have been exploring a lot of issues that transcend pure, simple research interests for the past five years. If you look at some of the research – debates around ISIS, freedom of speech, content moderation – all of these things are now becoming increasingly mainstream. When I was going through this, thinking about how we understand this idea of values – you may have heard me discuss this somewhere else – I am increasingly starting to see the methods and approaches we use, and the values that underlie them, as what I call (I’ll try to use a technical term, which I’ll explain a bit better as I go step by step through the answer) a problem-solution assemblage. By that, I mean there are certain ways that we frame a problem, which also bring in certain sets of solutions, as well as the methodologies that enable us to address those solutions.

There’s an interesting double development – I’m old enough, I’ve been working with digital culture since probably the early days of blogging, pre-Twitter, up until mobile phones and all this stuff – that we have seen happening. And this links to the computational methods, but it also links to this broader transition from ‘Terrorism Informatics’, to extremism, to a kind of mainstream-isation of certain methods being used.

This is 10 years ago – maybe the Arab Spring might be a good sounding board – and there are certain histories around this. What we call extremism research or digital methods was partially more focused on digital activism. People had been doing research, and a lot of it was around how to make digital activism more effective and democratic. It was only when the so-called Arab Winter – I like that term – started coming, through this popularisation of online jihadism, and all these things that happened and the debate around that, and now increasingly with Trump, and Cambridge Analytica, and far-right extremism, and anti-refugee hate speech, that we have been seeing this change towards a dystopian framing of how we understand digital culture. And in a way, I think extremism research has been a canary in the coal mine in that respect.

We see these two movements that have been going on in parallel, if you look at this idea of the problem and the solution. One of them is that we have seen the emergence of big data – partially because of the availability of social media data, digital traces, and all kinds of methodology that has been emerging as part of that – and we see all kinds of new approaches being developed and implemented. For somebody like myself, there is a certain feeling of magic the first time you download half a million or a million Facebook posts and can perform analysis on them. There is a lot of interest and excitement around what these methods can do and their improvement over time, but at the same time that has been conjoined with, or paralleled by, this shift towards a more dystopian discourse.

If you look at these values, around how we understand digital and computational methods, I think part of it also has to do with the types of problems that emerged out of trying to understand the darker or more negative sides of communication. And so the values that colour or influence the research questions, and the way these methods are being used, are increasingly shaped by this slightly more dystopian understanding.

If you take a practical example: once we are trying to understand the problems around digital communications, oftentimes the solutions are also, to a degree, prescriptive. What this means is that we are dealing with something that we are also looking for a solution to, and this leads to certain types of questions and answers. Not that they’re necessarily bad, not that they’re wrong answers or wrong ways of approaching it, but that is not the only way that we can potentially look at the situation.

When I was going through this idea of how we understand this, rather than going into the specifics of extremism research, I think one interesting development has been a certain generalisation of an ‘extremism research mindset’ to other fields of digital research as well. And we see this now. It’s part of the content moderation debates: certain methods are being used which are based on identifying and detecting certain types of content, potentially with the side effect of removing them. The methods work really well for that. And they also come with a certain other set of problems. I think that’s a kind of roundabout way of answering where this comes from; I think we can then go into the specifics of it.

Eugenia Siapera (00:19:30)

Yeah, absolutely, that was fascinating. I’m very interested in this idea of the problem-solution assemblage, and perhaps, splitting off from this, I wonder – you made the argument that a lot of this kind of research is driven by the different sets of social problems that are happening within a specific historical and political context. But I wonder to what extent is this the case?

Also, I think that a lot of this research is more driven by the solutions available, if you know what I mean. A lot of this research may actually be driven by the availability of the methods rather than by using the methods to address social problems out there. So my next question then would be: what is it exactly that these particular methods contribute to theory building in terms of extremism research? And what kind of specific research questions do they lend themselves to?

I think this is very interesting, because we look at it from the point of view of, OK, we have this arsenal of new methods, but what are they there for? What kind of research questions do they lend themselves to, because they obviously can’t do everything?

And also, with respect to this particular question, to what extent can they contribute to actual theory building instead of just describing what is out there? I think this is a perennial problem of what was earlier called big data research.

And also, as an aside, you mentioned that you downloaded half a million Facebook posts, are you still able to do this?

Matti Pohjonen (00:21:00)

So we start off with the easy question: pre-2015, yes. That was before Cambridge Analytica. 

These are very difficult questions, in terms of theory building and methods. But I think one of the points that I was also trying to stress is that, as we know, it’s linked to the availability of certain types of data that we work with.

I’ll take an example: we were working on online hate speech in Ethiopia when the country had about 5% Internet penetration. We went to Oxford, to this very formal Oxford African Studies discussion, to present our results, and we couldn’t get the projector to work. So we showed, on our laptop, fancy graphs and data stuff to an audience that wasn’t as familiar with that kind of work. One of the questions that was asked was: why do you look at social media for this purpose? Why don’t you look at something else, because we are in a country where it’s a very urban, middle-class, and very small problem that you’re dealing with. My response was that if I had sufficient funding, I would rather get, let’s say, 100 anthropologists and send them to all different parts of the country and have them work for 18 months, trying to figure out the very nuanced, granular details of what is going on. But we don’t have that kind of funding, and nobody will give that kind of funding for doing very thorough classical research. So, as the second-best option, we can then start looking at public online conversations. And so, in a way, we start going into a lot of these methods because they are available.

There is this new possibility, provided by this explosion of digital traces online, where certain types of questions and answers can be approached, and there are potentially new questions that could not have been approached in the classical sense. I think a lot of it has to do with that: there are serious new questions that can be answered, and certain new interpretive frameworks that can be derived from them.

But at the same time, as you say, whether the research is then being driven by the discovery and excitement – that is something the blog posts address. What I’m trying to do is show that if you start demystifying what is involved in different types of so-called computational methods, they are very valuable for certain purposes, but for certain other things they might not be as useful, and they don’t replace the older theoretical frameworks. There was hype and excitement when Big Data became a keyword – the proclamation that theory is dead, that we have all the models and can just use big data, that we don’t need theory anymore. In fact, the last five years have proved the contrary: what we need is more theoretical and critical models. And we need more theoretical and critical discussions, so that these methods are not being used purely because they are there, but rather as significant parts of theory building.

Some of the more practical suggestions that I give for, let’s say, social network analysis in the blog posts were that we probably don’t know much about what goes on behind these metrics unless we go into the theoretical frameworks, or we place them into some kind of more qualitative or contextual framework.

And so, yes, I fully agree. I think, one, we should not be driven by the excitement of the methods – unless we’re working on some kind of exciting creative AI artwork, which is slightly different. But at the same time, we have to bring in this kind of negotiation between the theoretical frameworks and their limitations, as well as what is available and possible with certain types of methods.

And that’s the kind of work that I’ve also been trying to highlight and foreground. And again, it comes down to the way we frame the problem: the way we frame it theoretically also partially enables and restricts the types of methods and solutions or findings that we can get from it. But yeah, I think you’re completely correct.

Eugenia Siapera (00:25:15)

So I suppose maybe this, in some respects, is also part of the broader issue of research into extremism: that it’s not necessarily driven by theoretical considerations, but by very real, practical concerns, especially if you consider that people may be harmed. So I suppose this may also be a factor here.

But I wonder then, even if we assume that these methods are linked to theoretical frameworks, that they are kind of like driven by specific research questions about, you know, reality out there. To what extent are they fit for purpose? In other words, are they actually capable of addressing these research questions, and then providing answers that we can rely upon?

Matti Pohjonen (00:26:10)

Again, I think there’s such a diversity of different types of methods. When we are dealing with computational methods, I think we can start differentiating already at this point – we can talk about this a bit more later – between what is called analysis, what is called detection, and what is called prediction. And each of them comes with certain clusters of methods, and certain assumptions within them.

So with analysis, potentially, if you use computational methods, you’re not making broader claims about certain types of social processes; you’re analysing what data is there and what the patterns in it are. You do social network analysis, you analyse what the communities are, what the boundaries between them are; there are certain statistics. These are, relatively speaking, not as difficult to look at conceptually. So a lot of analysis is relatively sophisticated in terms of the data available. And we can talk about data a bit more later.
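To give a concrete, hedged sense of what this descriptive kind of analysis looks like, here is a minimal Python sketch of one standard social network analysis metric, degree centrality. The accounts and follower relationships are entirely invented for illustration:

```python
from collections import defaultdict

# Hypothetical follower relationships between made-up accounts
edges = [
    ("ann", "bob"), ("ann", "cat"), ("bob", "cat"),
    ("dan", "cat"), ("dan", "bob"), ("eve", "ann"),
    ("eve", "cat"),
]

# Count connections per account (degree)
degree = defaultdict(int)
for follower, followed in edges:
    degree[follower] += 1
    degree[followed] += 1

# Degree centrality: connections normalised by the number
# of other accounts in the network
n = len(degree)
centrality = {node: d / (n - 1) for node, d in degree.items()}

# The most connected account sits at the centre of this toy network
most_central = max(centrality, key=centrality.get)
print(most_central, centrality[most_central])  # cat 1.0
```

In real research one would use a library such as networkx and much richer metrics; the point is only that analysis of this kind describes patterns in the data without claiming to detect or predict anything, which is exactly the distinction drawn here.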

But then, once you move on from analysis, I think increasingly we are seeing a move towards what is called detection and prediction. So detection involves looking at large datasets and trying to find examples of, let’s say, X and Y. We’re trying to find examples of, let’s say, ISIS supporters, and we’re trying to detect who they are by reading through big datasets. Or we are trying to find examples of foreign supporters, or examples of hate speech. So we are going into this idea that we are trying to identify certain things. And even more, we’re trying to start classifying social processes based on the detections that we find. At that point, once we start asking what we do with the detections, as I discuss in the blog posts, we are going into largely complicated theoretical terrain, because, as we know, these systems are not foolproof. Every detection brings with it the possibility of getting things wrong. We have to start thinking about what balance we want to strike in terms of getting things wrong and right. I use an example which is very common in medical research, which looks at false positives and false negatives. When we are getting things wrong, we are in fact getting things wrong in significantly different ways. And each of these might have broader social and political costs that we also need to bring into the framework.

Let’s take an example. We want to identify a neo-Nazi supporter who is about to commit a violent act. If we don’t identify that person – which is called a false negative (forgive me if I get the terminology wrong) – there is a certain social cost in not catching that person before they commit something. Whereas in another case, let’s say we falsely identify certain people as belonging to a category. Say we are looking for ISIS supporters, but then we start profiling a broader community of people who might not be supporting them; we are then also creating suspect communities. In both of these cases, we are dealing with a certain kind of trade-off. I think one of the participants of the workshop said it really well: this is something that we cannot solve by technological considerations alone. The trade-off works according to ideas of privacy and all kinds of other issues that we want to bring in there.

In a way, I think, answering your question in a roundabout way – again, I don’t want to go into the diversity of different motivations for why people go into extremism research – but there is a risk, and we need to also look at what is involved in this kind of performative act of labelling communities as extremist or something else, and what comes out of that. I don’t know if that answers your question, in a roundabout way.
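The trade-off described above can be made concrete with a toy detection sketch; the risk scores and labels below are invented purely for illustration. A detection system outputs a score per account, and where we set the decision threshold decides which kind of error we accept more of:

```python
# Invented (score, is_actually_extremist) pairs for ten accounts
accounts = [
    (0.95, True), (0.80, True), (0.65, True), (0.55, False),
    (0.50, True), (0.40, False), (0.30, False), (0.20, False),
    (0.10, False), (0.05, False),
]

def confusion(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for score, label in accounts if score >= threshold and not label)
    fn = sum(1 for score, label in accounts if score < threshold and label)
    return fp, fn

# A strict threshold flags almost nobody: no innocent accounts are
# swept in, but real cases are missed (false negatives).
print(confusion(0.9))   # (0, 3)

# A lax threshold catches every real case but also flags innocent
# accounts (false positives), creating "suspect communities".
print(confusion(0.3))   # (3, 0)
```

Moving the threshold only trades one kind of error for the other; as the discussion stresses, choosing the balance between them is a normative and political question, not a purely technical one.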

Eugenia Siapera (00:30:15)

Yeah, or probably created more questions. Because I suppose it’s one thing to come up with a false positive or a false negative, but I wonder – in general, in social science methodology we have these kinds of concepts that help us understand the value of the methods and the value of the research. And I’m talking now about the concept – or the construct, if you want – of validity: to what extent the methods are doing what they say they’re doing. So it’s not only that they may not be as accurate, but in general, are they actually doing what they say they’re doing?

So this kind of validity that, I suppose, in more qualitative research – for example, in content analysis – you get people to agree, or you use other kinds of proxies: give it to people who will know and then you check the validity. But what kinds of procedures are followed to ensure, or to approach, any kind of validity for these methods?

Matti Pohjonen (00:31:15)

Yeah, so again, I think there is the challenge of generalising, as if there were one form of validity that would be used. Once we go into the different debates on validity – for instance, inter-coder reliability, or what I think is called face validity – all of these bring different elements into the situation.

I think we can pick up the idea of ground truth slightly later. But what I’m interested in here, and we can then start generalising this: if you look at the idea of extremism, in a way, we are trying to identify certain things with a certain purpose. What, in fact, are we trying to do when we’re trying to identify certain things? We’re trying to come up with a classification: something belongs to a category, and something doesn’t belong to a category. Oftentimes, for this to become validated – I’ll have to bring in the ground truth to explain this in proper order – to validate whether something belongs to a category, we need this idea of a ground truth, based on which we determine what the category is.

So, in terms of, let’s say, what characteristics we decide mark somebody as belonging to a certain extremist group, or what characteristics identify hate speech: the ground truth is the standard against which computational methods usually validate their results. And we can go later into supervised learning and other things.

The idea of ground truth is not that significantly different from qualitative methods. If you look at inter-coder reliability: historically, there often hasn’t been enough effort put into validating ground truth, but nowadays, with all the criticisms, there is a lot more inter-coder reliability work going on. With inter-coder reliability, you get a number of people to agree on the accepted definition of a certain term, and then you use that as a way of classifying the data in the analysis. This is supposed to be a process where you get different types of people to agree on certain things; oftentimes, it is basically dictated by the research project.

As you know, as a researcher, when you have a codebook, you sometimes have 20 to 30 pages of instructions for how this ground truth or this inter-coder reliability is supposed to work. And once you iterate, people finally agree on a definition.
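The kind of agreement process described here is usually quantified with a chance-corrected statistic such as Cohen’s kappa. A minimal sketch in Python, with the coders’ labels invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders annotating ten posts as 'hate' or 'not'.
a = ['hate', 'not', 'hate', 'not', 'not', 'hate', 'not', 'not', 'hate', 'not']
b = ['hate', 'not', 'hate', 'hate', 'not', 'hate', 'not', 'not', 'not', 'not']
print(round(cohens_kappa(a, b), 2))  # prints 0.58
```

Raw percentage agreement here is 80%, but kappa discounts the agreement the two coders would reach by labelling at random, which is why codebooks are iterated until kappa, not just raw agreement, is acceptably high.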

A lot of other ways of establishing validity are, in a way, not that different from earlier methods. What is interesting, then, if you look at what is being validated, is that the problem is not so much what happens after we have established inter-coder reliability, or after we have found the ground truth; it is more challenging to determine what this ground truth is in the first instance. And that’s the kind of thing I wanted to bring in, in the blog posts as well: we can always come up with minor tweaks to the method that will improve the accuracy of how we measure the system against the ground truth. But that doesn’t say much about how we come up with the ground truth in the first place. And so there’s a lot more debate that needs to be had on what it is.

I think, in a broader sense, if you look at research on extremism and the darker sides of digital communication, there is this question of what, in fact, we are measuring against. It works relatively well for certain problems; sometimes there are serious challenges. One of the reasons I bring this in, given my more anthropological background, is the research that I have done. When I was doing work in Ethiopia on online hate speech debates – and this is the motivation for how I come into the situation – we were hosting workshops where we were trying to debate: how do we understand who is responsible? What, in fact, is hate speech? What are the harms? And how do we prevent online hate speech in the country?

There was a stakeholder meeting with opposition and government representatives, the church, and the media. For a long time there were a lot of very heated debates, and it was very difficult to get people to agree on what the definition was. There was one comment from one of the old professors, who has now died, I’m trying to find it here […] the comment that he made, as the more senior professor summarising these discussions in his elegant way, I’ll quote: “These are the horns on the head. This is hate speech, fear. It is very obvious that what we are looking for here is war for the mind, dirty or clean, there is war for the mind going on pro and against government. What should we do. If the opposition succeeds in building on big hordes, people will be afraid, and vice versa with the government. Can we remove the horns. We are looking at conflict. In conflict, we have two sides. In conflict, we must make both sides good. Two good groups to talk together, if one is entered as bad or good, there is no way forward.”

Basically, what I took away from that very long research process is that once we start coming up with certain definitions, it’s very hard to agree on how these definitions work. They are, in fact, part of a broader social and political conflict itself, in places like Ethiopia. And if we bring that back into how we are measuring things, what he was saying is that we shouldn’t just look at what it is, but at how we represent some kind of underlying dynamic.

I’ve been reflecting on this a fair bit in some of the work that I’ve been doing. In one of the other pieces that I wrote, I started looking at the metaphors we frame this with. On the one hand, often more common in the West, we have this centre core of allowed speech or allowed activity, and then we have the area around it, which is forbidden or should be restricted. Whereas in conflict, you have two sides opposing each other, and in between you have spoilers, people who want to make the conflict worse, and people who want to make the conflict better. Through this example of having done work in a different context, we already see two very different constitutive metaphors for how we frame the issue, or frame the idea of ground truth.

One is that we have a very solid set of definitions that we can apply to certain situations. Alternatively, the situation itself makes it very difficult to come up with a consensus, and we need to look at the tensions underlying it. I’m not saying this should apply to all the different contexts, but if we look at extremism research, this is one way of framing the problem and what might be involved. Again, I went a very roundabout way, so I don’t know if that answered the question.

Eugenia Siapera (00:38:40)

Yeah, absolutely. And to make it a little bit more concrete, what I had in mind, more specifically, is that in situations like this in real social life – the example of Ethiopia is perhaps instructive – I’m thinking that it’s perhaps problematic to treat all the voices as having equal validity, because quite clearly, the voices of people who tend to be harmed by more extreme speech have to count for more than the voices of those who are doing the harm. So I suppose the ask here is to find some kind of ground-truth consensus that at the same time centres the voices of the people who tend to be on the receiving end of potential harms – harmed not only by the outcomes of extreme speech or extremism, but also by the methods themselves.

So I suppose if you are to design, maybe, an artificial intelligence system that looks to detect – OK, we’ll come to prediction later – but even in the sense of detecting: what exactly is it detecting, whose job is it doing, and what kind of master is it serving? That is the question I was trying to get at, both earlier when it comes to values, and now when it comes to ground truth. That is perhaps more of an epistemological question.

Matti Pohjonen (00:40:15)

Very briefly, on the first question […] we are dealing with a lot of researchers who are doing seriously good work, lumped in the same category […] Extremism research is done by certain companies in content moderation, but it’s also done by the police and security services. The reason I’ve been saying that, obviously, is that you cannot lump all these people into the same category in terms of motivations. However, as you’re alluding to, a lot of these methods, the way they are being employed – and a lot of critical research has gone into this – also serve the interests of certain political systems. If you look at predictive policing, or border control, or the war on terror, and there have been books written on this, they have been predominantly, at least in the earliest stages, employed, or powerfully employed, by a lot of state actors.

But at the same time, where I started off is that the use of these methods is now also becoming mainstream. We are seeing their use a lot in anti-racist hate speech research and other areas. So it’s very difficult to lump different people into the same category. People working on, for instance, anti-racist hate speech, or against far-right extremism, may have different motivations, even though the mechanisms they’re using are, in a technological sense, the same. I don’t know if that makes sense.

Eugenia Siapera (00:42:00)

Yeah, it does, though I’m pretty sure there has to be a way of agreeing on certain ethical parameters within which this kind of research applies. But let me just stop here very briefly and encourage our listeners to submit questions via the chat or the Q&A function, so please feel free to put your questions there and then we’ll go through them as the discussion proceeds.

So the next area that I wanted to probe a little bit further with you, Matti, concerns the problem that I suppose is the most readily identified by both the computer scientists who develop some of these methods and by social scientists as well, which is the problem of the data: the training datasets for machine learning approaches. You spoke in your blogs about supervised methods, unsupervised methods, and also about the more technical side of things, but if I understand correctly, a lot of this hinges on the kinds of datasets that you use to train these systems. And we know, as you mentioned earlier, that a lot of these datasets are generated on the basis of convenience: these are data that we have available, for example from Reddit or wherever you can harvest them. So I wonder if the use of training datasets adds a further layer of complexity and bias to these models and systems? And to what extent is it really fixable? Because we hear a lot about bias in algorithmic systems being fixable, because it’s a question of data. But I wonder if this is really the case.

Matti Pohjonen (00:44:00)

Yeah, again, we’re getting a number of very relevant and, I think, also very challenging and difficult questions. Yes, the third blog post – or the last blog post if you count the introduction – talked about predictive modelling and looked at some of the models. The common story here, which we have heard a million times, is garbage in, garbage out: the models are only as good as the data that is put into them. What has been interesting, as these computational models have evolved, is that we now have this increasingly relevant and increasingly powerful debate on AI ethics and AI bias: a lot of the datasets that especially the more sophisticated systems are trained on bring in certain latent biases from where they are derived. There are multiple issues from a global perspective; for parts of the world we don’t even have data, so this issue doesn’t even arise, because there’s nothing, and the bias is globally structural. But in the domains where we have been using them, these language models might have certain biases, and this is often the legacy of the way datasets exist in the real world. So, in a way, cultural biases seep into the datasets being used. And then, by proxy, this becomes relevant to the models we’re using in research, because we might reproduce this bias within our results, and so on ad infinitum; it’s like a sedimentation of different levels.
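The garbage-in, garbage-out point can be made concrete with a deliberately skewed toy example: a bare-bones word-count classifier (the group name, texts, and labels here are all invented) that learns to flag an identity term simply because the training labels happened to co-occur with it:

```python
from collections import Counter

# Toy training set whose labels are skewed: every example mentioning
# the (hypothetical) group name happens to be labelled 'hate'.
train = [
    ("groupX are terrible people", "hate"),
    ("groupX should go home", "hate"),
    ("lovely weather today", "ok"),
    ("the match was great", "ok"),
]

# Per-label word counts: a bare-bones stand-in for a trained model.
counts = {"hate": Counter(), "ok": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    """Score each label by how often its training words appear in the text."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

# A benign sentence is flagged purely because of the skewed training data.
print(classify("groupX are lovely people"))  # prints 'hate'
```

Real systems are far more sophisticated, but the mechanism of "sedimentation" is the same: whatever correlations sit in the training data, legitimate or not, are what the model reproduces.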

There are two ways we can start thinking about approaching this. First, there’s the whole de-biasing AI debate, which I think is extremely relevant. It’s a way of identifying problems within these datasets and curating them better, so that the results won’t be as unrepresentative of certain populations, including by race, gender, ethnicity, and so forth.

And then there is a second option that I have been thinking about a lot, which deals with the fact that AI is also a window into already existing cultural biases. One of the questions I have been interested in here is: can we even create an unbiased dataset? And that’s the question you’re asking: is it possible, or are we barking up the wrong tree here? Should we in fact include the vulnerable people who are implicated in the research in the process of developing these datasets, because the results, in a way, affect them? And I think this is what you have been talking about in some of your other work.

As an old cultural studies person who read a lot of cultural studies, there’s a good quote I remember by Roland Barthes from the 60s, one of those cranky old Marxist critical theorists. He said that “objectivity is the ‘unauthored’ voice of the bourgeoisie”. By that he meant that what is not being said is basically where the power lies. So even with datasets like these, the question becomes: should we aim for a univocal, unproblematic, unbiased ground truth that we build the models on, and just improve them and make them better? Or should we foreground the fact that the construction of AI models itself, within extremism research and hate speech research, should also be part of the research problem that we’re working on, one that includes the social, ethical and political issues as well?

For instance, there are examples of participatory dataset building, because, as I mentioned with the Ethiopian case earlier, especially in the sensitive fields we are researching, we are dealing with social antagonisms, political antagonisms, all kinds of issues that are also implicated in the datasets. In a way, it becomes a choice of how we curate them.

I think these are the two ways. Once we start approaching it in this sense, the question about de-biasing datasets can also be turned around: we become very careful about the type of datasets we want to curate for a certain purpose, and hence the models we work towards for the purpose we’re trying to aim for, rather than using them passively and somewhat unconsciously. Obviously, when you’re looking at dogs and cats it’s not really a big issue, but when you’re dealing with vulnerable communities, or trying to identify certain malignant actors, it becomes a lot more sensitive. That’s the kind of debate where I don’t have any clear answers, but those are some of the issues that, in a very subtle sense, I was trying to raise in the posts as well.

Eugenia Siapera (00:48:40)

This is very interesting for me, because my background is in journalism studies, and this has been a perennial discussion within that sphere: can you have objective journalism? It’s impossible. But at the same time, you have to have journalism that, at the very least, is accurate. So I suppose this debate, and if not solutions then the norms that have emerged within journalism and communication studies, could possibly feed into this debate about training datasets.

Also, I suppose something that is made clear by Barthes and others is that you have to make your ideological stance very clear. So you might say, OK, I want a system that is explicitly feminist. And this is why I wanted to start with the issue of values, and perhaps now we return to it. If you make your stance clear, then a lot of other pieces fall into place. You cannot assume the place of God and say, OK, now we’ll recognise who is extremist and who is not, who’s the bad guy and how to take them out. But then you see how all the pieces fall together.

And I suppose this brings me to the last set of questions that I have for you, which concern the outcomes of this research. Because in part, of course, research on extremism is oriented towards theory building, as we mentioned earlier, but a very real part of this is that it is research that feeds into social policy.

So if we take into account all the issues we’ve discussed, but also the requirement from governments and other kinds of actors to produce an empirical basis for policy, how can we deal with this conundrum? What can we say to them? How accurate is this research? Especially when there’s pressure to say, OK, AI can predict, so tell us now, what do you predict? It’s like a new kind of oracle, I suppose, in a way.

Matti Pohjonen (00:51:00)

There was a term called ‘solutionism’ at some point. 

So I think […] yes, there is some feedback into policy, but oftentimes academic research functions slightly separately from policy, and sometimes it doesn’t have the power to influence policy. If you’re lucky, it does.

I think one of the areas where it becomes increasingly relevant is to use research to identify some of these issues. And to bring them in, even if that means being the nagging, critical person at the back of the room asking, by the way, have you thought about this? Rather than providing an easy response, OK, here is what is happening online, you also use that opportunity to problematise some of the ways we’re dealing with it.

I think the big elephant in the room, or the big question here, is that a lot of the data currently being used is not something that we researchers have access to. The biggest datasets sit with social media companies and with governments, and they’re using them regardless of what we’re doing. One reason why I think it is relevant for social scientists and people working in the extremism field to be actively involved is that the situation is too important to leave either to computer scientists or to these companies alone. So we can bring some of these additional questions into what is going on, which might not always be as solution-oriented.

What I’m doing in my research is working on content moderation. I’m looking at how AI systems are being used in content moderation vis-à-vis different countries that have not usually been covered, trying to raise questions about what it tells us more broadly about certain social and political policies. And at the same time, even though I’ve been very critical of the research being done, I don’t dismiss it; I think criticism is often the best form of flattery. By engaging in the kind of dialogue that we’re having, we can also make sure that the methods we use, or that people are using in more classical extremism research as well […] you make sure that on such important issues you are very confident of the results that you present, because they might have policy impact. I think that’s the most we can probably do as academic researchers.

Eugenia Siapera (00:53:30)

I wonder, though, Matti, if we could – I know there are important discussions taking place elsewhere – participate in formulating a set of norms and ethics, and then, perhaps, guide research with these. I know that all of us in universities now have ethics guidelines, and every piece of research has to conform to them. But I wonder if we could specifically prepare a set of codes, or a set of norms, that would function as parameters guiding this kind of research, and there have to be certain limits. For example, when I’m reading your blog on prediction, and knowing also what is going on – in the US with predictive policing, say, and in the UK with risk assessment in terms of welfare fraud and welfare entitlement – I’m very concerned, as a citizen as well as an academic, that our research could in part be used in this way.

So I suppose what I’m asking is: could we say ‘no’ to a particular kind of research, even if there is the possibility to develop the methodological know-how?

Matti Pohjonen (00:55:15)

So there are two different things here. One is that there can be certain types of ethical guidelines; whether people will follow them is a different question. Such guidelines already exist for ethical computing, and people are coming up with new ones, so yes, that can happen. And more specifically for extremism research, it might make sense to have some checks and balances and guidelines, and I think there’s a lot of good work already being done on that.

But more broadly, the way I see it, we are now in kindergarten: we are training systems, and they are becoming increasingly better at doing certain things. So, as with school, we should have a social and political discussion of how we want to raise our children. We need certain kinds of literacy programmes. Obviously, the values are not going to be something that absolutely everybody can agree on, but as with school, we should teach them a certain understanding of the world. Something like that, a form of training guidelines for how we want to train these computational systems, and for what the issues involved are, is, I think, definitely an important task.

Another thing, which we probably don’t have enough time to talk about, is datasets. Because every dataset is, to a degree, manufactured or synthetic, we should potentially start seeing datasets as a public good, in the way that libraries are curated according to a certain public understanding.

So when we want to use these datasets – there is open-source work being done there – we should also be asking, OK, how do we know that this dataset is useful for this purpose, and whether it really doesn’t have these biases in it. Also, a lot more public investment should be going into creating these fundamental pieces and tools that we can use. I see it this way: I think both are very useful, this kind of work needs to be done, and there’s a lot of good work being done at the moment.

Eugenia Siapera (00:57:30)

I’m very happy that the conclusion is that we’re at early stages yet and we still have a lot to learn. So I suppose our time is running out, and I still haven’t heard from anyone from the audience. Oh, there is a question.

Matti Pohjonen (00:57:50)

We can go into more specifics. I know we have been approaching a lot of abstract issues here. Very happy to take on more specific questions.

Eugenia Siapera (00:58:00)

Yes, there is a question here in the chat that I’m going to read aloud, and I’m sure you have access to it, Matti, as well. 

“In radicalisation, extremism research we oftentimes don’t have access to large data regarding individual digital trace data. Hence, observing radicalisation paths of individuals is nearly impossible. Could you elaborate on this issue a bit? What are possible work-arounds? I think one possible solution could be measuring ‘radicalisation potential of digital spaces’ (i.e. certain telegram channels), and to conclude about its danger for people to radicalise themselves / get radicalised.”

Yeah, it’s a very interesting question, because then you have to follow a lot of steps to get to: what is danger? What is radicalisation? But I’ll let you address this, Matti.

Matti Pohjonen (00:59:50)

In 2017 there were a lot of really depressed researchers, because Facebook and other platforms started shutting down access to data. Simultaneously, because of the crackdowns, a lot of activity moved to channels like Telegram. So a lot of the methods that had been developed, which were very good on large datasets, became more difficult to work with, especially on platforms like Telegram.

My limit is about 1,000 Excel rows: if it goes above 1,000 rows, you might be better off using some computational method. But if it’s under that, some of the computational or big-data methods are probably not the best to use. So why don’t we just use old-school participant observation or other methods that are more suitable for closed platforms? In that sense, if you want to look at, for instance, debates on radicalisation, you can follow those, rather than trying to come up with certain measurements for what is or isn’t radicalisation, and do more qualitative research. So again, that would be my kind of roundabout answer. I don’t know if the best solution would be having some kind of statistical criteria for radicalisation when you can be qualitative.

Eugenia Siapera (01:00:30)

I wonder […] if you could identify certain discourses that have been associated with radicalisation in the past, and then you encounter them in specific Telegram channels, that might be an indicator of the potential, and you could [unheard] this in terms of incidents and so forth. So I suppose there are workarounds. The question, I suppose, is what you do with this. And also, radicalisation discourses tend to be quite dynamic, so even if you train something, by the time you operationalise it, it may have shifted to something else. So you’re always playing a catch-up game, I suppose.

Matti Pohjonen (01:01:20)

But to be slightly more positive, Twitter has now opened up its API for academic researchers, so we have some hope on the horizon, because for a while it was gloom and doom, with everything being shut down. At the same time, one of the things I mentioned in the blog posts, which we can also highlight here, is that the availability of data from a platform does not necessarily mean that it is the most relevant data for a phenomenon. A lot of the methods have been developed on Twitter because it’s very easy to access Twitter data. However, as Julian is saying, a lot of activity now happens on Telegram and in slightly more obscure channels. So it’s also a balance between how suitable our methods are, and how suitable our findings are, if we are more data-driven rather than problem-driven or theory-driven. But yeah, good question.

Eugenia Siapera (01:02:20)

So I don’t know if you have any other remarks, Matti, or if Maura wants to come into the conversation, or maybe pose some questions or some conclusions.

Maura Conway (01:02:50)

Thanks, Matti. Thanks, Eugenia. I just posted the link to the Twitter API; they’re making a lot more of their data available for free to researchers, which is positive in many ways. An interesting aspect is that highlighted on the top of the page there is, in fact, research out of Cardiff University on online hate that uses Twitter data.

I want to thank you both very much. I wondered if you had anything that you hadn’t yet had the opportunity to say or anything that you wanted to say in terms of wrapping up.

Matti Pohjonen (01:03:40)

Well, maybe I’ll repeat the point that criticism is the highest, or the most sincere, form of flattery. By providing a critical perspective, I by no means dismiss any of the methods or mechanisms being used, because I use them myself too, and I am very interested in exploring how they’re being used for various purposes.

So I think one of the takeaways from the blog posts is that there are certain questions, or certain tick-box checks, that are very useful to raise when you’re applying different types of methods. Just because they exist, you don’t automatically need to use them over other ways of approaching the problem. But when you do use these methods, it is good to be aware of some of the debates increasingly going on around them, and these debates are very prolific and going in lots of directions. That has been done with the motivation that, ideally, the methods can then be used in a more effective and potentially more sustainable manner.

And so that’s why, when we did the blog posts, I was trying to put in a lot of pointers: if you use content analysis, here are some of the debates going on at the moment; if you use predictive analysis, here are some of the debates going on. And I tried to highlight the fact that these questions are not always technological, or questions of validity: they raise broader social and political questions around them, and these have to be highlighted as well. In a way, I wish I had written a book, but the blog posts ended up being a kind of entry point into a rhizomatic, big field of different potentials and methods. Hopefully people will get something out of them, and can then do their own research and find other perspectives. I think that’s my conclusion.

Maura Conway (01:05:40)

I was just going to say, as a comment from an audience member listening to the conversation, that I feel like this is a good time to really put these ideas on paper, because, just in the last few days, a special issue of Terrorism and Political Violence has been published that is about terrorism and ethics quite broadly. When I was listening to you talk there, Matti, I was thinking that a lot of your critique, and the concerns stated by both yourself and Eugenia, are broadly ethical issues. While some of the papers in the special issue were about quite practical ethics issues, one of the things I think is missing is precisely the sort of practical ethics issues around the deployment of these tools in our specific space, online extremism and terrorism research, but also, levelling it up, the broader ethics issues that these approaches raise, which Eugenia was coming back to quite a lot.

So I guess my point is, I think it’s very positive that we are beginning to talk more about ethics in extremism and terrorism research. We’re certainly not early in terms of the ethics conversation – well, better late than never – but plainly there’s a bunch of additional stuff that has yet to be raised and that needs to be addressed in our field. And I think some of the things that you raised, Matti, in this conversation are important ones, absolutely.

Matti Pohjonen (01:07:30)

It’s also about finding a vocabulary to describe these things; oftentimes it takes a lot of time to find a vocabulary through which you can describe them. We’re dealing with both methods and a certain field of research, and finding that vocabulary takes a bit of time and a bit of working, and there might not be any simple way of coming up with it. So hopefully we can find a vocabulary that helps us address these issues in a more systematic way.

Eugenia Siapera (01:08:00)

One question, or comment, we have from Liz concerns transparency in making decisions about what data to look at. She’s referring to the tendency on Twitter whereby, basically, if you focus on one tweet, you get one set of findings; if you focus on retweets, or the context in which the tweet was posted, you get different kinds of findings. So Liz makes the point that you have to be transparent about the methodological decisions you make.

Also, she says “we have had real problems in getting to the bottom of what search criteria Twitter uses when searching and selling data, it’s very ambiguous but less so now that it is available on the API. Big data research is not always as easy as people think but these issues are often hidden in the findings people present.”

That’s absolutely the case, yeah. And I feel guilty here, because that’s what we’ve done in the past. But I suppose people then read your article and, hopefully, constructively and creatively engage with it.

I also wanted to bring into the discussion an area that we didn’t touch on at all, but that I think is quite important, which concerns the safety of the researchers who are doing this kind of research. On the one hand, I suppose, when it comes to extremism, you’re much safer if you’re behind a screen looking at some really unpleasant people. But on the other hand, your mental health and your exposure to this kind of toxicity are something that has to be taken into account. So I suppose the ethics is not only the ethics of the research, but also the ethics of protecting researchers themselves.

Matti Pohjonen (01:10:00)

Then, maybe just quickly responding to what Elizabeth was saying: I completely understand, and I’m also guilty. I think it comes incrementally with skill, because these systems are very hard to use. And there is something to be said for the reproducibility of research, which is one of the standards that computer science, at its best, operates by – not always in practice, but at its best – so that we know exactly from what kind of data, and through what kind of algorithms, the results have been derived.

But the challenge is that this takes a lot of training for researchers who don’t have the technical background. One of the things that VOX-Pol has been doing, as part of its workshops, is trying to have a debate or dialogue between computer scientists and social scientists, because, as an anthropologist, I have a lot of blind spots, and using the Twitter API is close to ritual magic, in a more anthropological sense. But the more you do it, the more you learn. So obviously, transparency about what type of data was used, how it was collected, and the challenges of that should be the first step for research, because, ideally, you’re not hiding anything.

But yes, on ethics I do agree. It is an even more pressing challenge in more ethnographic research, because you are not separated. There have been interesting articles about researching people with whom you fundamentally disagree. And the content moderation mental health issues are a big thing. So again, it is something that we can all remind ourselves of: let’s try to be more transparent and more forthcoming about everything that we’re doing.

Maura Conway (01:12:00)

That’s maybe a good sentiment to hang on to: be more transparent and more forthcoming about what we’re doing, and also attend to researcher welfare issues.

I want to thank Matti for all his insights and Eugenia for coming up with some great questions. I want to thank everybody who stuck with us past 5pm. We’re going to put the video up on our YouTube channel and we’ll tweet when it’s available, and you can follow VOX-Pol on Twitter. We hope to see some of you at future events, but for now, have a great evening, and thanks again.
