Unsung digital sociologists

No excuses for any citation #fail.

An open list of women scholars who are addressing digital technology’s impact on society and vice versa.

Please feel free to add to it!

Click here

Understanding the political economy of digital technology

A BSA Digital Sociology Study Group event hosted by the Web Science conference at Vrije Universiteit Amsterdam, May 27th 2018

BelleVUe building, De Boelelaan 1091, 1081 HV Amsterdam, The Netherlands.

Book >>> Here (If you only want this event choose ‘Events Day Only’ option) 


All the 750-word abstracts are here >>> Amsterdam Papers

10:00 Paper 1 The political economy of a large-scale hypertextual Web search engine: a critique of linguistic capitalism and the side effects of Google’s advertising empire

Pip Thornton, Royal Holloway University of London @Pip__T

10:20 Paper 2 Hacking the Political Economy of Youth

Shane Duggan, RMIT University & NYU Steinhardt @ShaneBDuggan

10:40 Paper 3 Platform Capitalism and Political Consumerism: Perspectives of Boycotting and Buycotting in the Sharing Economy

Giulia Ranzini, Maaike van Vliet and Ivar Vermeulen, Vrije Universiteit Amsterdam

11:00 Break

11:20 Paper 4 What does its (un)ethical approach to children tell us about digital capitalism’s circuits of practice?

Huw Davies & Vicki Nash, University of Oxford @huwcdavies

11:40 Paper 5 Post-scarce informational resources, unequal skill development and class stagnation

Chong Zhang, University of Durham @zhangchongzc

12:00 Paper 6 Big Tech’s privatization of schools: a case study of computing education in England

Laura R. Pinkerton, University of Oxford @laurarpinkerton

12:30 Lunch

13:30 Paper 7 Home and Walled Garden: The Political Economy of the Smart Home

Murray Goulden, University of Nottingham @murraygoulden

13:50 Paper 8 The future of information freedom after the internet: balkanization, fragmentation, territorialization, or diversification?

Julian von Bargen, York University, Toronto

14:10 Break

14:30 Interactive Panel

Our panel of leading digital sociologists will discuss questions and challenges from the day’s attendees and crowdsourced questions from the wider digital sociology & web science communities participating online.

This is a chance to have your say, shape our future events, and help us progress the digital sociology project.


Karen Gregory @claudiakincaid

Susan Halford @susanjhalford

Jessie Daniels @JessieNYC

Tressie McMillan Cottom @tressiemcphd  (TBC)

16:00 Drinks Reception

18:00 Close


Understanding the political economy of digital technology

A BSA Digital Sociology Study Group event hosted by the Web Science conference at Vrije Universiteit Amsterdam, May 27th 2018

In more optimistic times we thought of ourselves as masters of digital technology: we told ourselves it was empowering, liberating, and democratising. Today, there is growing concern that we have ceded control of digital technology to digital capitalism’s rapacious market monopolisers, whose former insiders, in their epiphanies, tell us they have ‘ripped apart the fabric of society’. All corporate algorithms are black-boxed, protected by intellectual property law. Concepts that describe them, such as AI and machine learning, are problematically slippery and esoteric. So we are told that algorithms we can’t see or understand are to blame for digital capitalism’s social and political effects. This is a particular concern for sociologists, because those who suffer material and social inequality are increasingly having their life chances defined by these algorithms (see, for example, Eubanks (2018)). Perhaps the tech companies aren’t “equipped to self-regulate any more than the fossil fuel industry” (Umoja Noble, 2018): it would seem the best we can hope for is to judge them by their results, attempt to legislate, or petition technology’s plutocrats to stop ‘doing evil’.

All these issues, however, share an overarching theme: technologies are made and deployed within a political economy that incentivises, allows, enables or rewards actions that draw us away from visions of digital technology – particularly the Web – as a transporter for the Enlightenment’s values. Driven by the logic of extracting the maximum surplus value from our social and economic transactions and our (often very personal) data, these companies have ruthlessly and relentlessly pursued economies of scale to leverage their platforms’ network effects, whatever the social cost. Interpreted through the lens of political economy, problems with fake news, the attention economy, surveillance, the power of Silicon Valley and so on all demonstrate that politics, economics and digital technology are now indivisible. Addressing the political economy of digital technology more explicitly will help explain who ‘we’ are in this instance, how ‘we’ lost control, and what ‘we’ have to do to get it back.

This event will showcase some of the scholarship that is currently tackling these issues under the banner of Digital Sociology. As this event forms part of the 10th ACM Conference on Web Science, speakers and delegates will have the opportunity to share insights with a broad community from a diverse range of academic and professional backgrounds. We also invite contributions from members of all disciplinary fields that provide insights into the relationship between digital technology and the political economy. How does the political economy affect your area of expertise? What needs to change and how can it be changed?

This is only a brief list of suggestions: we welcome contributions on any topic that addresses the day’s theme.

  • Fake news, propaganda and public (dis)information
  • The digital public sphere: reconsidering democracy
  • Digital surveillance, high volume data and governance
  • Changing and emerging industries
  • Digital labour
  • Digital inequalities
  • Digital wellbeing
  • Education

Provisional schedule for the day:

Besides the traditional paper presentations, the event organisers will experiment with other formats, such as the fishbowl, that allow speakers to engage their audiences in more active ways.

  • 9.30 – 10.30 panel with crowdsourced questions from the Digital Sociology & Web Science communities 

Panelists confirmed to date:

Jessie Daniels, Susan Halford, Karen Gregory and Kate Orton-Johnson

  • 10.30 – 13.00 paper session 1
  • 13.00 – 14.30 lunch (including networking events)
  • 14.30 – 16.00 paper session 2
  • 16.30 – 18.00 fishbowl – a moderated interactive session where attendees can have the floor to discuss the burning issues (including what a bigger Digital Sociology event should look like and how it could be organised).

To present your paper, please submit an extended abstract (up to 750 words) to our EasyChair page here by midnight GMT on Friday the 30th of March 2018. This should include an indication of the substantive issues and how they relate to the day’s central theme. Decisions on abstract submissions will be communicated by midnight GMT on the 6th of April.

Successful submissions will be put forward for a journal special issue (details to follow shortly).

A breakdown for the registration fees for the day (and the full conference, which includes keynotes from Prof. José van Dijck, and Sir Tim Berners-Lee) can be found here.

What We Don’t Want to Know About Teenagers Online

Slides here

Tuesday 4 April 2017, 11:00 – 12:30 PAPER SESSION

As their time is increasingly colonised by extensions to the school day, and their public presence continues to be problematized, teenagers are being pushed back into their homes, where many will remain domiciled well into adult life. Consequently, digital spaces are becoming a crucial means by which young people develop their identities as they transition to adulthood. Simultaneously, teenagers are being watched, controlled, and warned by anxious parents and authorities. They are also politically burdened with the expectation that they will become investible units of human capital who will fill the jobs of the future and produce growth. We may be reassured that we can teach young people to conform to our expectations.

In this paper we present the findings from an in-depth qualitative study of teenagers (from a range of socio-economic backgrounds) in two schools in the UK. Through 50 interviews with 13-18 year olds we examine how (if at all) and why young people exercise agency by adapting to, challenging, or subverting existing adult normative socio-technical cultures and expectations. Practices explored include: avoiding monitoring and control, bypassing/circumventing age restrictions and parental controls, testing boundaries, managing parental anxieties, being provocative, inventing/managing multiple identities, dealing with contamination and intrusions – including advertising and educational initiatives, using backdoors such as proxy sites and onion routers, dealing with threats and engaging with misinformation. Identifying young people’s counter-normative practices helps us critique the construction of young people as ‘homo œconomicus’ – saviours of our economy.

Kin-Work: Digital Sociology and/against STS

Des Fitzgerald

School of Social Sciences, Cardiff University

Please note: this is the text of a spoken talk given at ‘Digital Sociology vs STS,’ at the Oxford Internet Institute, on 07 December 2016. It is not really intended to be picked over in great detail! It contains some things I’ve said elsewhere, and various implicit gestures to other work. I’ve very, very lightly edited it, but basically kept it in the general format of a talk to make these origins clear.  

Thanks for the invitation. I should say that I’m here more or less ex officio – and in fact reluctantly so – as the co-convenor of the BSA STS group, with Stevienna De Saille (who sadly can’t join us today).

I’m also here largely because Huw couldn’t get any STS people to actually talk – which is maybe a data point that we should dwell on at some stage.

Anyway, with this in mind, it’s probably worth being clear at the outset, that I don’t actually know anything about digital sociology. And if that’s understandable, something that’s maybe less forgivable is that I’m not exactly an authority on STS either. Or at least I’m not someone committed to the conceptual and methodological infrastructure of that sub-field, and am often, in fact, quite hostile to it. So.

So: I’m not going to offer any kind of STS analysis of digital sociology, whatever that might be. What I do want to do, though, is first to think about digital sociology through a related framework that I have been interested in – and this is a perspective that we might call the social life of epistemic things, or, maybe better, the life history of discipline. I then want to say something brief about two other longstanding hobby-horses of mine, which are ontology and collaboration.

In my own work, for the last couple of years, I’ve been trying to think about the edges between sociology, STS, and biology – and doing so on the basis of some idea that the normative projects of British sociology and STS would gain from a more nuanced account of how biological and social agencies inhabit one another. What has been striking to me, though, in trying to think those edges, is that the more you look through the archive of British sociology, the more you find these odd strands of biological material – entrails, I want to call them – draped across the history of the discipline (I think, for example, of the short-lived experiments in social biology at the LSE, or of the ecological and psychiatric leanings of Chicago school sociology, and so on. The historian Chris Renwick has done huge amounts to draw together some of the key moments of this history).

Much of my own work, then, has been about calling attention to sociology’s long, complex, ambiguous relationships to psychiatry and psychology, to social epidemiology, to human ecology, and so on. And I am increasingly interested in the professional invisibility of these entrails and the commitment to a normative sociology that, if it is nothing else, is not biology.

I raise this here because I think it’s worth remembering, in thinking through the project of a digital sociology, that sociology is a discipline with complex and sometimes unexpected inheritances; that what look like the transformations of the present have histories, and disciplinary histories at that. So partly here I’m simply wondering if there are not other, older moments of computational and algorithmic thinking in sociology – and I’m wondering how such moments might animate, torque, or thwart the self-imaginary of a digital-sociological project. (I know at least one account of that, which is the work that Patricia Clough, Karen Gregory and their colleagues have done on cybernetic genealogies in sociology). But I also raise this as a way of calling attention to the intensity of the police-work that surrounds the sociological project today. I think here, for example, of John Holmwood’s anxiety about sociology’s internal ‘self-subversion,’ as new formations split away from the ‘core.’

I think the presence of this kind of boundary policing, and the concerns that animate it, should be at the centre of our discussion.

But more critically maybe on this point (and here is a characteristically STS question) I want to ask how digital sociology imagines its lateral affinities, and how it thinks itself in relation to other kinds of hybrid disciplinary formation. It seems to me that these go in two directions: one to other projects that centre on the digital – the digital humanities, obviously – and another to other hybrid forms of sociology (I think of something like a neurosociology). With an STS hat on, and with a view to understanding the imaginary through which digital sociology is now producing itself, I want to ask about the affinities and disaffinities between digital sociology and these kinds of projects. How, for example, might a digital sociology be like, and/or different from, a biosociology, or a geosociology? In what way do the differences between these things matter?

The second point I want to make is about ontology. So, last week, I attended the second annual meeting of a group called AsSIST-UK – which is a new national organisation for STS in the UK. The meeting was also part of the 50th anniversary celebrations of the Science Studies Unit at Edinburgh, which of course everyone will know was a foundational moment, not just for what we now call STS in the UK, but for STS, internationally, as such. I guess this accounts for what struck me (in all frankness) as the slightly nostalgic pall that overhung the meeting – with talks gesturing back to older moments in science studies, and with those acts of memory working, in the old recursive style, as ways of both understanding and justifying the present.

Perhaps this is a stupid thing to say about the birthday party of a literal institution – but I was struck by the degree to which STS had become institutionalised, how it has worked (and still works!) towards its own institutionalisation. I wondered about the costs of that work – which is to say the costs of becoming an institution. So partly I want to raise the question of what digital sociology might learn – for good and ill – about the work of institutionalising (in which I think STS, ever a flakey ensemble, is intensely invested).

Throughout that meeting, I couldn’t get out of my head a remark that Karen Barad makes in her monumental work, Meeting The Universe Halfway: that the foundational mistake of science studies – its original sin, I want to say – was in thinking that there was a difference in kind between the practice of science studies and the practice of science. Barad instead posits a mode of engagement in which an understanding of the entangled co-emergence of ‘social’ and ‘natural’ factors might best come from ‘engaging in practices we call “science studies” together with practices we call “science”’. Marking at least the desire for such a practice of ‘together-with,’ many have since diagnosed an ‘ontological turn’ in STS, of course, with even a special issue of Social Studies of Science devoted to the topic. Yet this meeting reminded me of just how much the social study of science was and is committed to a deeply traditional ontology of the social – which is to say: for many of my colleagues, at least at senior and institutional levels, STS is still the putting of science in context.

And as an outsider, as someone put in the position of reviewing digital STS abstracts for the BSA annual meeting, it seems to me that there might similarly be something to be said about distinguishing between a digital sociology and a sociology of the digital – which seem to mark very different ways of doing ontological work between and across the domains of the digital and the sociological. (I know of course that there are ongoing debates about non-representational approaches to the digital, and about what distinguishes these).

I don’t think the important question here is one of figuring out ontologies of the digital – it’s (and I stress here that my question is motivated by the history of STS) a question about the ontological consequences of how different kinds of digital sociology work. Another way of asking that question is: what is it about digital sociology that makes it a sociology? What are the consequences of that naming? What does the noun sociology do that a word like ‘studies’ or ‘research’ or ‘practice’ does not? And what are the consequences of that doing vis-à-vis our ontological purchase on this arena, and thus for how we are actually able to theorize and participate in digital spaces as such?

The last point I want to make is about collaboration. So I’m someone very broadly trained in the sociology of the biosciences, and in STS – but largely all of the work that I do these days is in some sense interdisciplinary or collaborative. I am increasingly reluctant to use those words, however – not least because they position the interdisciplinary researcher as somehow deviant or secondary. To name ‘interdisciplinarity,’ in other words, is to name an object in need of explanation; it identifies the thing that is not discipline; that which cannot be taken for granted (and thus, of course, that which can – which is to say: discipline as usual).

In an account of the positioning of collaborative work within STS, my collaborator Felicity Callard and I have argued that there is a kind of violence to this prefix ‘inter’ – insofar as it establishes an epistemic regime that takes disciplines to be prior, bounded, and stable; it sanitizes histories of rupture and admixture; it covers over the very active work of making discipline thinkable.

I am tempted to say that, today, self-described ‘interdisciplinarity’ is the primary engine through which disciplines are made whole.

I think here of the many collaborative projects through which STS is currently being pursued, not least those projects where, for example, STS scholars take responsibility for the ‘ethical, legal and social implications’ of scientific developments – thus committing themselves to the epistemic parcelling-out of expertise, practice, and even affective engagement (several important papers have addressed this question).

Not least as a result of that development, I think one of the liveliest areas of contemporary STS is the small literature that has concerned itself with interdisciplinarity as a problem, and that has argued for a great deal more attention to the socio-technics of collaboration, including to STS itself as a collaborative actor. So I guess all of this is to raise a question about the collaborative stakes of digital sociology – to ask a question about the kind of collaborative actor that digital sociology is going to be.


In the introduction to their recent edited volume on Digital Sociology, Kate Orton-Johnson and Nick Prior ask: ‘To what extent is the sociological imagination a sufficient basis from which to embark on investigation into digital worlds with cross or even trans disciplinary indices?’ I like this question a great deal. And of course it comes with the ghost of its own answer – which is: the sociological imagination is a totally inadequate point of embarkation for digital sociology.

In my own related weariness with these terms, I have started to think of the work of interdisciplinarity and collaboration as the always-in-progress work of kin-making – following Donna Haraway, I would like to re-position the work of interdisciplinarity, not as the prim coming-together of two well-established strangers, but as the recuperation of relationships with complex kin. (I’m drawing on Haraway’s most recent book, which I heartily recommend).

All of which is a complex way of putting collaboration on the agenda for Digital Sociology – of saying, based on the sometimes lamentable experience of STS, that collaboration has a cost, and that that cost is sometimes very high indeed. So I want to ask you whether kin-work is not a better point of embarkation, for digital sociology, than the now tired tropes of the sociological imagination or interdisciplinarity. And I want to ask if recuperation is not a useful logic for wading into the transdisciplinary indices of digital life.

Kin-work is of course a good account of what we’re trying to do here more broadly today. And though we set it up in a playfully antagonistic way, I want to argue for STS and digital sociology as the ‘core’ discipline’s own awkward kin – two sub-fields brought together by the degree to which they are both alive, albeit in different ways, to the transformations of the present.


Digital Sociology v STS

A joint Digital Sociology Study Group and STS Study Group Event at the Oxford Internet Institute

Wednesday 13 December 2016, 13:00

The Oxford Internet Institute, 1 St Giles, Oxford, OX1 3JS

The concept of Digital Sociology has been in circulation for around five years now. But if the British Sociological Association’s annual conference is anything to go by, ‘the digital’ is still on the periphery of British Sociology. Perhaps problematically, Digital Sociology shares a stream with STS at the conference. We are taking this marriage of convenience as an opportunity for anyone interested in the future of Digital Sociology and STS to get together and discuss the following questions:

Why do we need Digital Sociology when we have STS?

What are their affinities and disaffinities?

Are digital methods and digital ontologies transformative for STS?

What distinguishes Digital Sociology from all the other disciplines that claim to study the relationship between society and social media, the Internet, the Web and digital data?

What use is the concept of Digital Sociology?

How can we join forces across institutions to progress the project of Digital Sociology?

To help us address these questions and similar questions that may arise on the day we are very pleased to have presentations from:

Professor Susan Halford, Director of Southampton University’s Web Science Institute; Professor Will Housley, Vincent Wright Chair, Sciences Po & University of Cardiff; Dr Mark Carrigan, Research Fellow in the Centre for Social Ontology at the University of Warwick and Digital Fellow at The Sociological Review; Dr Karen Gregory and Dr Kate Orton-Johnson, Lecturers in Digital Sociology at the University of Edinburgh; and Dr Des Fitzgerald, Lecturer in Sociology at the University of Cardiff (with more speakers to be confirmed).

Each speaker will talk for around 10-15 minutes before we open up the discussion to the floor. If you have any thoughts on the questions above, or would like to get involved in the study groups, please come along. It’s our intention to solicit your input for a plan of action. The meeting will be followed by a free drinks reception.

Spaces are very limited, so please reserve your place as soon as possible.

Booking your place

Booking is essential. Venue numbers are restricted and it is advisable to book early.

Registration Fees: BSA member £10 / Non-BSA member £15

Register Online at


For administration enquiries, please contact events@britsoc.org.uk

Ethics Case Study: Social Machines

What are social machines, how do they differ from social media and what new sociological phenomena do they represent?

Back to Case Studies

Back to Ethics Home

Networked digital technologies and devices are now ubiquitous in many societies, providing new channels through which individuals and communities can connect, share information, co-create solutions, distribute tasks, support one another, play and socialise. While online groups and social media are now familiar concepts, and have been the subject of much sociological research, an arguably new phenomenon has emerged which bears closer scrutiny as part of the broader Digital Society research agenda. This has been characterised as the Social Machine. The scope and boundaries of this concept are still being defined and taxonomies for describing and differentiating social machines are evolving. In essence, however, the term ‘social machines’ represents a set of unique socio-technical systems whose existence and functionality depend on a synergistic blend of human and computational ‘engineering’.

Social machines are conceptually related to, but qualitatively different from, social media, information and communication channels or platforms, and the social web, a broader term describing web-mediated social interactions. The concept is closely associated with Collective Intelligence, Distributed Computing and Crowdsourcing, which rely on the effort and cognition of large numbers of individuals, mediated by digital systems, to generate information or solve problems that would be impossible for computers or people to tackle alone. Inevitably the term has also become associated with the Big Data movement, particularly in relation to the mining of large corpuses of social media and open data.

Social machines appear when other ingredients of sociality are added. For example, the EyeWire project – involving massive numbers of distributed ‘citizen scientists’ examining digital images of retinal tissue to trace and map neurons – has a sociality layer, in the form of an entertaining and competitive gaming format and a community support forum. Likewise, the crowdsourcing platform Ushahidi builds new knowledge (annotated maps) gathered from objective (location) and socially derived or curated data (e.g. outbreaks of violence or disease) and, like other ICT for Governance innovations, was designed to leverage societal power as a catalyst for change. Another example is the reCAPTCHA system, which crowdsources human judgement by asking service users to type the letters they see in distorted image files in order to determine whether they are humans or bots. These behavioural data, in turn, feed a machine learning algorithm that incrementally improves the quality of automated text conversion software for digitising books (most users are unaware of this).
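The dual-purpose loop described above can be sketched as a simple majority-vote aggregator, in the spirit of the original reCAPTCHA book-digitisation scheme: each challenge pairs a control word (whose answer is known, and which gates out bots) with an unknown word scanned from a book. The function name and threshold below are illustrative assumptions, not reCAPTCHA’s actual (proprietary) pipeline:

```python
from collections import Counter

def label_unknown_word(responses, control_answer, threshold=0.75):
    """Aggregate human transcriptions of an unknown word.

    A simplified, hypothetical sketch of reCAPTCHA-style crowdsourcing:
    `responses` is a list of (control_word, unknown_word) transcription
    pairs. Only responses whose control transcription matches the known
    answer count as votes; the unknown word is labelled only once one
    transcription wins a clear majority of those votes.
    """
    votes = Counter(
        unknown for control, unknown in responses
        if control.lower() == control_answer.lower()  # passed the human gate
    )
    if not votes:
        return None  # no trusted responses yet
    word, count = votes.most_common(1)[0]
    return word if count / sum(votes.values()) >= threshold else None
```

A response only counts towards the unknown word’s label if the control word was transcribed correctly; once a transcription wins a clear majority, it can become training data for the text-conversion model, which is what lets human judgement quietly improve the software.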

Ethical Issues Presented by Social Machines

Social machines pose a number of ethical and societal challenges. In his original vision of social machines, set out in his book Weaving the Web, Tim Berners-Lee argued that social machines on the web would release “people [to] do the creative work and machines [to] do the administration”. While this has happened in some cases, in others the reverse is true. Indeed, many intentional crowdsourcing applications involve humans doing the dull, repetitive tasks while the machines do the creative work, raising issues of trust and equity. Unintentional crowdsourcing takes this one step further, such as with facial recognition bots integrated into social software, or online professional collaboration tools, where users become both the data and the first-line data processors (through their choices), feeding predictive algorithms which may then curtail their options in the interests of greater ‘precision’ and ‘efficiency’.

In the following section, we look at one cluster of social machines which are themselves used to study social machines, the Web Observatory, as developed and researched within the UK EPSRC project SOCIAM (The Theory and Practice of Social Machines).

Example: The Web Observatory

The Global Web Observatory is a research tool for harvesting, organizing, archiving and distributing data about the web, in linked, geographically-distributed and autonomously-managed nodes. The primary role of the nodes is to manage catalogues of resources about data (meta-data) and software apps that enable these data to be analysed and visualised, both retrospectively and in real-time. The catalogues may describe open data, research datasets, or corpuses of social media data available free or at a charge. Individual nodes often contain their own research datasets, although typically they act as intermediaries between the originating organisation and researchers wishing to undertake web analytics. Individual nodes contribute their catalogues, datasets, and apps to the master catalogue maintained by the Global Web Observatory, which mediates research involving each of the nodes. Such heterogeneous, distributed (‘broad’) data is a sine qua non of social machines research, yet its collection and aggregation can be ethically challenging.
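The catalogue-of-catalogues architecture described above can be illustrated with a minimal sketch. The field names and the `open_datasets` helper are hypothetical, chosen for illustration rather than drawn from the actual Web Observatory metadata schema:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """One catalogue entry describing a dataset held at an observatory node.

    Illustrative fields only – not the real Web Observatory schema.
    """
    dataset_id: str
    title: str
    node: str          # the autonomous node that holds (or brokers) the data
    licence: str       # e.g. "open", "research-only", "commercial"
    open_access: bool  # available free of charge, or at a cost

def open_datasets(master_catalogue):
    """Query a (hypothetical) master catalogue for openly available datasets."""
    return [record for record in master_catalogue if record.open_access]
```

A master catalogue would aggregate such records from every participating node, so a researcher could discover, say, all openly available corpuses without knowing in advance which node holds them; the nodes remain autonomous, with the catalogue acting only as an intermediary.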

The Web Observatory passively monitors open streams of web data, rather than seeking to modify these data or influence the web. Although it is not interventionist in the way that some other social machines are, it still raises important questions about the responsibilities and ethical obligations of observers and data holders. Today, Web Observatories operate under the tacit assumption that all data sources have been ethically pre-screened by the organisations releasing them, but whether this is tenable in the long term, at scale, and in light of new Data Protection regulations, is an open question.

At its current state of development, the Web Observatory has a light touch ethical regime premised on good faith participation, but as it matures, the infrastructure is likely to incorporate techniques or formalisms to negotiate and verify the ethical commitments of participating data controllers. Following the lead of administrative and medical data linkage initiatives, a proportionate and principles-based approach is likely to be most successful. The standards expected for participation in the Global Web Observatory also deserve extension from data and systems interoperability, to interoperable ethics and governance, and work in this area is ongoing.

The Web Observatory, as a global resource, is a work in progress, and will need to respond quickly to such issues as they arise. Furthermore, as a decentralised network of autonomous nodes, whose governance is distributed institutionally and geographically, jurisdictions and cultural assumptions will vary across nodes. Attempting to centralise the ethical discourse surrounding a global distributed network such as this may itself prove ethically problematic, but responsible leadership, shared high level ethical principles, supported by a system of distributed and collaborative governance (ironically, one of the key benefits of social machines), will help to manage these challenges in a changing environment.


This work is supported by SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/2 and comprises the Universities of Oxford, Southampton, and Edinburgh.

#digitalsociology @ #AoIR2016

see AoIR program for details

Levelling the socio-economic playing field with the Internet? A case study in how (not) to help disadvantaged young people thrive online. 

Numerous academic studies highlight significant differences in the ways that young people access, use and engage with the Internet, and the implications this has for their lives (boyd, 2014; Livingstone & Helsper, 2007). In contrast to the rhetoric around ‘digital youth’ that suggests young people are always connected, and despite the move towards richer and more complex models of digital inclusion that include skills and level of engagement (Hargittai, 2010; Hargittai & Shaw, 2014), quality of access (as measured by choice of device and network, and degree of personalisation) remains a crucial problem. While the majority of young people have some form of access to the Internet, for some this access is sporadic, dependent on credit on their phones or access to a library or another public setting. Rich qualitative data from a variety of countries have shown how such limited forms of access can create difficulties for some of these young people as access to the Internet becomes essential for socialising, accessing public services, saving money, and learning at school (Robinson, 2009).

This presentation will report on a two-year initiative in one area of the UK where it was estimated that around 10% of young people aged 14 did not have access at home to an Internet connection and a laptop or PC. In response, the local council, three state schools, and an ISP collaborated to provide thirty of these disconnected young people (and their families) with a free laptop and free Internet access for two years, with the aim of raising educational attainment and improving employment prospects for these individuals by school-leaving age.

We will chart the highs and lows of the initiative, which had its fair share of ‘successes’. In a few cases, access to the Internet has helped the young people and their families reconnect with relatives abroad, save money on phone calls and consumables, and access vital health services. Some parents have told us about their children using the Internet to extend their learning and look for college places or apprenticeships. And the scheme, by providing an alternative space to ‘hang out’ within cramped living conditions, has helped harmonise familial relations.

But the project also had its ‘failures’, where significant amounts of goodwill and effort led to limited meaningful change for a number of the families involved. Through in-depth analysis of observations from 15 school visits and 40 interviews with students, parents, teachers and other stakeholders, this presentation will highlight the basic tension in this and other initiatives, which summon, in varied ways, a tacitly accepted ideological agenda that cannot straightforwardly translate into benefits for the young people and families involved. In sum, we ask:

In what ways do political and economic forces influence the ‘success’ of a digital inclusion scheme?

We show that the hope underpinning such schemes, namely that the disadvantaged, if sufficiently empowered, incentivised, and aspirational, can use access to technology to transform or transcend what Bourdieu (1992) calls their “class of conditions” (p53), is largely misplaced. In microcosm, the initiative demonstrates how the neoliberal mind-set that is increasingly shaping the cultures and behaviours of our service providers and schools cannot solve the problems it creates. While in this initiative we have seen significant amounts of goodwill from all parties, both private and public, this often did not convert into meaningful change for the young people and families involved. As Foucault (2010) notes, neoliberalism’s project is “the overall exercise of political power modelled on the principles of a market economy” (p131). Moreover, “the only ‘true’ aims of social policy for neoliberalism can be economic growth and privatisation; thus the multiplication of the ‘enterprise’ form within the social body” (p148). In the home access initiative this was apparent: the mentality, in both the schools and the private ISP, to prioritise work that is documented, measured, and audited led to practices and support gaps that unintentionally disadvantaged the young people the initiative was designed to support. This was particularly the case when unanticipated contingencies and problems occurred.

Drawing on this rich case study data from the 30 families to critically assess the cultures and practices of the institutions that govern their lives, we will demonstrate how the challenges and realities of this initiative can be generalised to other well-intended schemes to address digital and social inequality, and highlight the complexity of ‘levelling the playing field’. We aim to show that even in wealthy post-industrial economies the global networked society is not “simply a fact – that is, as something that is just given and therefore inevitable: it is a choice, a choice made by some and working in the interest of some” (Biesta, 2013, p734). The home access scheme exposes the tacit logic of the power structures that shape this choice. We explore the different “mentalities of government” (Dean, 1999, p16) that produce the institutional regimes, knowledges, practices, and procedures that are “structured, internalised and normalised to exercise power over and through certain sectors of society” (Wyn & White, 1997, p133), which, in this case, meant some of the families inadvertently became points of power’s application when they believed they were being helped.


Biesta, G. (2013). Responsive or responsible? Democratic education for the global networked society. Policy Futures in Education, 11(6), 733–744. http://doi.org/10.2304/pfie.2013.11.6.733

Bourdieu, P. (1992). The Logic of Practice. Stanford University Press.

boyd, d. (2014). It’s Complicated: The Social Lives of Networked Teens. Yale University Press.

Dean, M. (1999). Governmentality: Power and Rule in Modern Society. London: Sage.

Foucault, M. (2010). The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979. Palgrave Macmillan.

Hargittai, E. (2010). Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the “Net Generation.” Sociological Inquiry, 80(1), 92–113. http://doi.org/10.1111/j.1475-682X.2009.00317.x

Hargittai, E., & Shaw, A. (2014). Mind the skills gap: the role of Internet know-how and gender in differentiated contributions to Wikipedia. Information, Communication & Society, 18(4), 424–442. http://doi.org/10.1080/1369118X.2014.957711

Livingstone, S., & Helsper, E. (2007). Gradations in digital inclusion: children, young people and the digital divide. New Media & Society, 9(4), 671–696. http://doi.org/10.1177/1461444807080335

Robinson, L. (2009). A Taste for the Necessary: A Bourdieuian approach to digital inequality. Information, Communication & Society, 12(4), 488–507. http://doi.org/10.1080/13691180902857678

Wyn, J., & White, R. (1997). Rethinking Youth. St Leonards: Allen & Unwin.

Draft BSA Guidelines for Digital Research: Case Study: Dilemmas in Conducting Social Media Research in the Field of Crime and Security


By Matthew L Williams and Pete Burnap

Directors, Social Data Science Lab, Cardiff University

A principal ethical consideration in most learned society guidelines on digital social research is to ensure the maximum benefit from findings whilst minimizing the risk of actual or potential harm (interpreted as physical or psychological harm, including discomfort, stress and reputational risk). All groups involved in the research, including social media users, commercial platforms and researchers, should be protected throughout the lifecycle of the project, from inception to data archiving. Users are often the primary concern given their vulnerability in the process. The potential for harm in social media research increases when sensitive data are collected. These data include personal demographic information (such as ethnicity and sexual orientation), information on associations (such as membership of particular groups or links to other individuals known to belong to such groups) and communications of an overly personal or harmful nature (such as details of morally ambiguous or illegal activity and expressions of extreme opinion). These forms of sensitive information abound on social media networks. In some cases such information is knowingly placed online (whether or not the user is fully aware of who has access to it). In other cases sensitive information is not knowingly created by users – this often occurs in cases of association between users (not everything can be known about another user before connecting, nor can changes in affiliation be monitored on a routine basis). Such information can come to light through the processes of analysis, visualization (of networks) and representation of social media data by researchers (Ruppert 2015).

Most social media research projects are likely to encounter only the first type of sensitive information. This is certainly the case where topics focus on mundane social activities online. However, projects that take as their focus behaviors that have been deemed problematic risk encountering multiple forms of sensitive information. Recent RCUK and government funded projects on cyberhate following terrorist events (Burnap et al. 2014, Williams & Burnap 2015, Burnap & Williams 2015, Burnap & Williams 2016), the spread of racial tension online (Burnap et al. 2015), the estimation of offline crime patterns using online signals (Williams & Burnap 2016) and suicidal ideation (Scourfield et al. 2016) have encountered all the forms of sensitive information outlined above. Here we take the example of cyberhate (Burnap et al. 2014, Williams & Burnap 2015, Burnap & Williams 2015, Burnap & Williams 2016) and provide an overview of our ethical decision-making process in sensitive social media research. The motivation for the ESRC- and Google-funded project stemmed from the increasing use of social media to communicate highly emotive reactions to events, such as terrorist attacks. The project’s objectives were to i) monitor hateful responses on social media following a series of events; ii) profile hateful social media networks; iii) link hateful content with other data, such as Google search terms and the offline press; iv) model hateful information flows to identify enabling and inhibiting factors; and v) study forms of counter speech. The project drew upon both computational and social science research techniques. We used the COSMOS platform[1] to collect and visualise Twitter reactions to the murder of Lee Rigby in Woolwich. Our first ethical dilemma was therefore related to consent: (i) as researchers, should we obtain consent from all users in the social media dataset?
As our intention was to conduct only quantitative analysis and aggregate-level visualization that retained the anonymity of users, we were satisfied that the consent users provide to Twitter in its Terms of Service met our criteria for minimizing harm (see the final paragraph for a discussion of consent in qualitative social media research).

The next stage of the project required the use of machine learning algorithms to classify hateful content and to build networks of users. Automated text classification of social media content performs well when conducted on datasets around specific events. However, accuracy decreases beyond the events around which classifiers were developed, due to changes in language use (Burnap & Williams 2015). Social network graph algorithms operate differently from classification algorithms, but they are also open to misrepresentation if there are data quality issues (such as missing data due to poor operationalisation of collection search terms). Reliance on algorithms presented the second ethical dilemma: (ii) how should researchers develop, use and reuse algorithm-driven text classification and social network graph processes that have the consequence of labeling content and users as hateful (and in some cases potentially criminal)? Where text classification techniques are necessitated by the scale and speed of the data (e.g. classification can be performed as the data are collected in real time), researchers must ensure the algorithm performs well (i.e. minimizes the number of false positives) for the event under study in terms of established text classification standards.[2] Furthermore, researchers have a responsibility to ensure the continuing effectiveness of the classification algorithm if there is an intention to use it beyond the event that led to its design. High-profile failures of big data, such as the failure to predict the US housing bubble in 2008 and the failure of Google search terms to track the spread of influenza across the United States, have led many to question the power and longevity of algorithms (Lazer et al. 2014). Algorithms therefore need to be routinely tested for effectiveness and may need to be ‘refreshed’ with new human input and training data if false positives are to be minimized, avoiding the mislabeling of content and users.
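One way to operationalise this obligation to ‘refresh’ an algorithm is to re-score the classifier against a fresh hand-labelled sample for each new event, and flag it for retraining when a key measure falls below a chosen threshold. A minimal sketch in Python (the function, labels, and the 0.75 threshold are illustrative assumptions for exposition, not part of the COSMOS tooling):

```python
# Illustrative sketch: checking a classifier's precision on a fresh
# hand-labelled sample and flagging it for retraining ('refreshing').
def needs_refresh(predictions, gold_labels, positive="hateful", threshold=0.75):
    # Precision on the positive class: of the tweets the algorithm
    # labelled 'hateful', how many did human coders also judge 'hateful'?
    flagged = [g for p, g in zip(predictions, gold_labels) if p == positive]
    if not flagged:
        return True  # no positives retrieved: the model cannot be trusted
    prec = sum(1 for g in flagged if g == positive) / len(flagged)
    return prec < threshold

# Hypothetical predictions vs. human-coded labels for a new event:
preds = ["hateful", "hateful", "benign", "hateful", "benign"]
gold  = ["hateful", "benign",  "benign", "hateful", "benign"]
print(needs_refresh(preds, gold))  # True (precision 2/3 falls below 0.75)
```

In practice the check would be run per event and per class, and a `True` result would trigger new human annotation and retraining rather than silent reuse of the stale model.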
Where social network graphs indicate users are associated with particular groups, which if made public may cause distress or reputational risk, researchers must question the quality of the data used to generate the association (as would be expected in all scientific reporting) and make careful decisions on whether to publish such content. Where such information is published, every effort must be made to maintain the anonymity of users in the graph, including efforts to reduce the likelihood of deductive disclosure (Stewart and Williams 2005).

Following on from text classification, statistical model building was utilized to predict hateful information propagation around the Woolwich terrorist attack. These models identified which factors, such as type of user, network capital, and type of language used (such as counter-speech), enabled and inhibited hateful information flows. This presented the third ethical dilemma: (iii) is the process of identifying factors that stem the spread of online hate speech a universally accepted goal? This may seem like a redundant question to citizens of many European countries, including the UK, where some forms of hate and antagonistic speech are criminalised. However, in the US hate speech is not criminalised, and online communications are protected by the First Amendment. Therefore, project funders located in the US (such as Google) may not wish to be associated with research that infringes upon such protections. Researchers must therefore use their moral compass to balance these jurisdictional prerogatives with the pursuit of scientific objectivity.

Representation of our findings presented the fourth ethical dilemma: (iv) is it possible to present the content of hateful and counter speech tweets in publication? Anonymous publication of actual examples of hateful tweets is precluded under Twitter’s Terms of Service, which forbid the anonymization of tweet content (the screen-name must always accompany the tweet text); ethically, therefore, informed consent should be sought from each tweeter before quoting their post in research outputs. However, this is impractical in most big data projects given the number of posts generated and the difficulty of establishing contact (a direct private message can only be sent on Twitter if both parties follow each other). Therefore, it is not ethical to directly quote tweets that identify individuals without prior consent. Furthermore, the Terms of Service also require that authors honour any future changes to user content, including deletion. As academic papers cannot be edited continuously post publication, this condition further complicates direct quotation (not to mention the burden of checking content changes on a regular basis). However, researchers should not conclude that conventional representation of qualitative data in social media research is precluded by these Terms of Service. As in conventional qualitative research, researchers can make efforts to gain informed consent from a limited number of posters if verbatim examples of text are required (although posters must understand that anonymity is not possible in these cases given that tweet text is searchable). In cases where consent is not provided, Markham (2012) suggests some innovative methods for protecting privacy in qualitative social media research.
Acknowledging that traditional methods for protecting privacy by hiding or anonymising data no longer suffice in digital settings that are archived and searchable, Markham advocates bricolage-style reconfiguration of original data that represents the intended meaning of interactions. While this may be suitable for general thematic analysis, it may not satisfy the needs of more fine-grained approaches, such as conversation and discourse analysis.

Social Data Science Lab Risk Assessment and Ethical Principles

Social research ethics are at the core of the Social Data Science Lab’s programme of work. Recent work shows that users of social media platforms are uneasy about their posts being collected without their explicit consent (NatCen 2014, Williams 2015). However, many social media terms of service specifically state that users’ public data will be made available to third parties, and by accepting these terms users legally consent to this. In the Lab’s research programme we interpret and engage with these terms of service through the lens of social science research, which often implies a higher ethical standard than provided in legal accounts of the permissible use of these kinds of data. The topic of ethics in social media research has been a key focus of ours and formed a primary research question in our first ESRC Digital Social Research Demonstrator Grant. Ethics as a topic continues to be embedded in our follow-on grants and we continuously reflect upon our practice as social and computational researchers. We are acutely aware of the key ethical issues of harm, informed consent, the invasion of privacy and deception as they relate to the collection, analysis, visualization and dissemination of social media data. Below we detail our risk assessment and ethical principles, which have been adopted by several social science research ethics committees in the UK.

Risk Assessment

Low risk – Tweet is from official/institutional account: Publish without seeking consent in most cases.

High risk – Tweets are from individual users and contain mundane or sensitive information (overly personal, abusive etc.). Must contact the user (direct message/@mention/email) and ask their permission to publish. Only publish if consent is received.

High risk – Tweet has been deleted, precluding publication under the Twitter Developer Agreement/Policy.

High risk – Tweet is from a deleted account, meaning the tweet itself has been deleted and publication is precluded under the Twitter Developer Agreement/Policy.
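The risk assessment above amounts to a simple decision rule, which can be sketched as follows (the function and field names are our own illustrative assumptions, not part of the Lab’s actual tooling; the mundane/sensitive distinction is folded into the individual-user branch, since both require consent):

```python
# Illustrative sketch of the risk assessment as a decision rule.
# The dict keys and return strings are for exposition only.
def publication_decision(tweet):
    if tweet["deleted"] or tweet["account_deleted"]:
        # High risk: deleted tweets and deleted accounts cannot be
        # published under the Twitter Developer Agreement/Policy.
        return "do not publish"
    if tweet["official_account"]:
        # Low risk: official/institutional accounts may be quoted
        # without seeking consent in most cases.
        return "publish"
    # High risk: tweets from individual users (mundane or sensitive)
    # may only be published with the user's consent.
    return "publish only with user consent"

sample = {"deleted": False, "account_deleted": False, "official_account": False}
print(publication_decision(sample))  # publish only with user consent
```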

Ethical Principles

  • We abide by the Economic and Social Research Council’s Framework for Research Ethics
  • All projects undergo Research Ethics Committee Review
  • Any significant changes to research design following Research Ethics Review approval are reported back to the Committee for re-approval
  • We abide by Twitter’s Developer Policy and Developer Agreement
  • We abide by the UK Data Protection Act 1998
  • We only use social media data for academic research purposes
  • We keep all information gathered on individual Twitter users confidential on secure password protected servers
  • We maintain the anonymity of all individual Twitter users in our research
  • We only publish in research outputs aggregate information based on data derived legally and ethically from the Twitter APIs
  • In research outputs we never directly quote individual Twitter users without their informed consent. Where informed consent cannot be obtained we represent the content of tweets in aggregate form (e.g. topic clustering, wordclouds) and themes (decontextualised examples and descriptions of the meaning or tone of tweet content). These forms of representation preclude the identification of individual Twitter users, preserving anonymity and confidentiality
  • In research outputs we do directly quote from Twitter accounts maintained by public organisations (e.g. government departments, law enforcement, local authorities) without seeking prior informed consent
  • We never share data gathered from Twitter APIs for our research outside of the COSMOS project team
  • We destroy all personal data if it is no longer to be used for research purposes

Funding: This work was supported by five Economic and Social Research Council grants: ‘Digital Social Research Tools, Tension Indicators and Safer Communities: a Demonstration of the Cardiff Online Social Media ObServatory (COSMOS)’, Digital Social Research Demonstrator Programme (Grant Reference: ES/J009903/1), ‘Hate Speech and Social Media: Understanding Users, Networks and Information Flows’, Google Data Analytics Research Programme (Grant Reference: ES/K008013/1), ‘Social Media and Prediction: Crime Sensing, Data Integration and Statistical Modeling’, National Centre for Research Methods (Grant Reference: ES/F035098/1/512589112), ‘Digital Wildfire: (Mis)information Flows, Propagation and Responsible Governance’, Global Uncertainties Ethics and Rights in Security Programme (Grant Reference: ES/L013398/1), and ‘Public Perceptions of the UK Food System: Public Understanding and Engagement, and the Impact of Crises and Scares’, Understanding the Challenges of the Food System Programme (Grant Reference: ES/M003329/1).


Burnap, P, Williams, M. L. & Sloan, L. (2014) ‘Tweeting the terror: modelling the social media reaction to the Woolwich terrorist attack’, Social Network Analysis and Mining, 4: 206.

Burnap, P. & Williams, M. L. (2015) ‘Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making’, Policy & Internet.

Burnap, P. and Williams, M. L. (2016) ‘Us and them: identifying cyber hate on Twitter across multiple protected characteristics’, EPJ Data Science 5, article number: 11. (10.1140/epjds/s13688-016-0072-6)

Burnap, P., Williams, M. L., Rana, O., Edwards, A., Avis, N., Morgan, J., Housley, W. and Sloan, L. (2013) ‘Detecting tension in online communities with computational Twitter analysis’, Technological Forecasting & Social Change.

Lazer, D., Kennedy, R., King, G. and Vespignani, A. (2014), ‘The Parable of Google Flu: Traps in Big Data Analysis’, Science, 343: 1203–5.

Markham, A. (2012) ‘Fabrication as ethical practice: Qualitative inquiry in ambiguous internet contexts’, Information, Communication and Society, 15(3): 334-353.

NatCen (2014) Research Using Social Media: Users’ Views, London: Natcen.

Ruppert, E. (2015) ‘Who Owns Big Data’, Discover Society, 23.

Scourfield, J., Colombo, G., Burnap, P., Jacob, N., Evans, R., Zhang, M., Williams, M. L., Housley, W. and Edwards, A. (2016) ‘The response in Twitter to an assisted suicide in a television soap opera’, Crisis: The Journal of Crisis Intervention and Suicide Prevention.

Stewart, K. F. and Williams, M. L. (2005) ‘Researching online populations: The use of online focus groups for social research’, Qualitative Research 5(4): 395-416.

van Rijsbergen, C. J. (1979) Information Retrieval (2nd ed.), London: Butterworth.

Williams, M. L. and Burnap, P. (2015) ‘Crime Sensing with Big Data: The Affordances and Limitations of using Open Source Communications to Estimate Crime Patterns’, British Journal of Criminology. Online Advance Access.

Williams, M. L. (2015), ‘Towards an Ethical Framework for Using Social Media Data in Social Research’, presented at Social Research Association Workshop, Institute of Education, UCL, 15 June 2015.

Williams, M. L. and Burnap, P. (2015) ‘Cyberhate on social media in the aftermath of Woolwich: A case study in computational criminology and big data’, British Journal of Criminology, 56(2): 211–238.

Williams, M. L., Edwards, A., Housley, W., Burnap, P., Rana, O., Avis, N., Morgan, J., and Sloan, L. (2013) ‘Policing cyber-neighbourhoods: Tension monitoring and social media networks’, Policing and Society 23(4): 461-481.

[1] http://socialdatalab.net/software

[2] Established measures include: precision (the fraction of retrieved tweets that are relevant to the search – i.e. for each class, how many of the retrieved tweets were of that class); recall (the fraction of tweets relevant to the search that are successfully retrieved – i.e. for each class, how many tweets coded as that class were retrieved); F-Measure (a harmonized mean of precision and recall); and accuracy (the total of correctly classified tweets normalized by the total number of tweets). Results of 0.75 and above (on a scale of 0–1) in each measure are considered outstanding (van Rijsbergen, 1979).
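The measures described in this footnote can be computed directly from counts of true/false positives and negatives. A minimal sketch in Python, using made-up counts for a hypothetical ‘hateful’ class (not real project data):

```python
# Illustrative sketch of the standard text-classification measures.
# The counts below are invented for exposition.
def precision(tp, fp):
    # fraction of retrieved (positively classified) tweets that are relevant
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of relevant tweets that are successfully retrieved
    return tp / (tp + fn)

def f_measure(p, r):
    # harmonic mean of precision and recall
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    # correctly classified tweets over all tweets
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion-matrix counts for the 'hateful' class:
tp, fp, fn, tn = 80, 20, 10, 890

p = precision(tp, fp)          # 0.80
r = recall(tp, fn)             # ~0.889
f = f_measure(p, r)            # ~0.842
a = accuracy(tp, tn, fp, fn)   # 0.97
```

On these invented counts all four measures clear the 0.75 threshold cited above, so the hypothetical classifier would count as performing well by that standard.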


Draft BSA Guidelines for Digital Research: Case Study: Mixed Methods


This is an application to an ethics committee for approval to begin researching young people’s online search and evaluation practices using mixed methods.


Please note:

  • You must not begin your study until ethical approval has been obtained.
  • You must complete a risk assessment form prior to commencing your study.
  • It is your responsibility to follow the University’s Ethics Policy and any relevant academic or professional guidelines in the conduct of your study. This includes providing appropriate information sheets and consent forms, and ensuring confidentiality in the storage and use of data.
  • It is also your responsibility to provide full and accurate information in completing this form.
  1. Name(s):
  2. Current Position: PhD Student
  3. Contact Details:




  4. Is your study being conducted as part of an education qualification?

            Yes                           No       

  5. If Yes, please give the name of your supervisor
  6. Title of your project:
  7. What are the proposed start and end dates of your study?

            October 2012 to June 2013

  8. Describe the rationale, study aims and the relevant research questions of your study

To find out:

When and why young people use the web to search for information.

How young people search for information; for example, their choice of search engine and search query.

How young people judge credibility by discriminating between various sources of information.

Whether young people are persuaded by contested information they find online.

Whether a young person’s socioeconomic status and education have any influence on these questions.

  9. Describe the design of your study

I have recruited two institutions to take part in this study. At each college I will be working with a member of the management team who also teaches. They have volunteered their students for this study and integrated my research methods with their students’ learning objectives.

These institutions are distinguished by their students’ socioeconomic status and journey through our education system.

Stage 1: Group interviews

Previous research in this area suggests young people need to be motivated by their interest in a topic before they research it thoroughly online. If I give them topics in which they have no interest, then they will perform perfunctory searches. The purpose of the group interviews is to discover which topics interest the students. In groups of 15 I will ask the students:

  • When and why do you use the web to search for information?
  • What issues, topics, and questions would you look to the web to resolve?

During the interview, I will suggest examples and ask the students if they would use the web to investigate them. For instance: is global warming man-made?

(See interview schedule)

Group selection will depend on:

The members of staff and students from each institution who are willing to participate.

Which students under the age of 18, having volunteered, have obtained consent from their parents to participate.

Stage 2: Collaborative writing project

The questions produced during the group interviews will be used for a collaborative writing project.

This will involve 3 sub stages:

  1. The students will be asked to record their responses individually (to the issues discussed during stage 1) in Word before any online research has taken place. These responses will be uploaded to a secure, password-protected server at the University.
  2. Next, the students will be asked to construct responses, again individually in Word, but this time using the web as a resource. These responses will be uploaded to a secure, password-protected server at the University. Search queries will be captured for analysis via a proxy server. When students use the web at each college, their search history (the addresses of the web pages they visit) is already stored on a server on the college’s network. I will set up a proxy server to perform this function for the students participating in my study; it will capture only the data (search logs) that each institution already captures, and only for the participating students (or rather their machines).
  3. For the final stage, the students will be asked to integrate the individual responses they wrote during sub-stage 2 into a wiki that reflects a group consensus on each topic. This stage will be videoed to observe the deliberations and interactions between the students during this process.

The wiki will be hosted by the University.

A similar wiki can be seen here (removed).

The students asked to create the wiki will be given pseudonyms for log-ins and explicit instructions not to identify themselves or the institution at which they study. Only I, as the wiki’s administrator, will be privy to this information.

The wiki will be written at each institution within the students’ normal timetable. As administrator, I will lock it for editing outside these hours to prevent any contamination or abuse.

Each wiki page will have a discussion page within which the students will be encouraged to discuss and justify their choice of source.

Observation and Recording of Collaborative writing project

As well as asking students to document their deliberations, while they write the wiki, I will observe and video the project in progress and record my discussions with them about their choice of sources and credibility decisions.

The video is intended as an objective ‘memory’ of the process. I need to see who spoke to whom and when and compare this to the timeline of edits on the Wiki. During the debrief interviews I can also refer to the videos.

Stage 3: Debrief individual interviews

The dual purpose of the debrief interviews is to assess each volunteer’s experience after they have had time to reflect on the project, and to capture any thoughts or processes that were not revealed during the observations.

  10. Who are the research participants?

Approximately 30 post-secondary school students aged 16–19, and the specific teachers who have agreed to take part in the study.

  11. If you are going to analyse secondary data, from where are you obtaining it?

From the institutions the students attend; for example, student fees and any anonymised demographic data each institution can provide.

  12. If you are collecting primary data, how will you identify and approach the participants to recruit them to your study?

My point of contact at each institution will volunteer classes to participate. I will then ask all members of these classes whether they are happy and willing to take part. If any of the students are aged under 18, I will seek parental consent before proceeding.

  13. Will participants be taking part in your study without their knowledge and consent at the time (e.g. covert observation of people)? If yes, please explain why this is necessary.


  14. If you answered ‘no’ to question 13, how will you obtain the consent of participants?

For each stage of the study I will seek written consent from a member of each institution with the appropriate level of authority, from the teachers involved in the study, from the students participating in the study, and, if necessary, from their parents (see consent forms).

  15. Is there any reason to believe participants may not be able to give full informed consent? If yes, what steps do you propose to take to safeguard their interests?


  16. If participants are under the responsibility or care of others (such as parents/carers, teachers or medical staff) what plans do you have to obtain permission to approach the participants to take part in the study?

For participants under eighteen, I will seek parental consent by writing to each participant’s parents (see parental consent form).

  17. Describe what participation in your study will involve for study participants. Please attach copies of any questionnaires and/or interview schedules and/or observation topic lists to be used.

Participation for the young people will involve:

An hour-long group interview with approximately 15 of their peers to discuss how and why they use the web to find information. This is an opportunity to discuss topics or arguments they would use the web to help resolve.

A five-hour collaborative writing project, carried out within normal college hours, in which they discuss and document sources that support their arguments.

A fifteen-minute individual debrief interview to discuss their participation in the project.

  18. How will you make it clear to participants that they may withdraw consent to participate at any point during the research without penalty?

The information sheets will state that there will be no repercussions if, at any time, participants wish to withdraw from the study by speaking to me: during the group interviews, during observations of the collaborative writing project, or specifically from the debrief interviews. I will also give participants my university email address so they can withdraw at any time by email. In case they feel uncomfortable approaching me directly, participants will also be able to withdraw from the study indirectly by informing a member of staff at their institution, or a parent or guardian.

  19. Detail any possible distress, discomfort, inconvenience or other adverse effects the participants may experience, including after the study, and how you will deal with this.

Stage 1: Group Interviews

Although I will make every effort to avoid sensitive topics for the wiki, I cannot predict how individual students may react to every possible topic. I will inform the students from the outset that if they find a topic problematic they should tell me or a member of staff at the institution, so that I can withdraw the topic and/or the student from the study. For example, the students may want to research the link between mental illness and marijuana use, and an individual in the group may have personal experience of this.

The interviews will be digitally recorded; the recordings will be removed from the recording device and transferred to a secure, password-protected server at the University.

During the transcription and analysis of the recordings all the participants will be referred to by pseudonyms.

At all times during the study a member of staff from the institution will be present or in earshot.

Stage 2: Collaborative Writing Project

It is possible that participants may abuse the anonymous collaborative writing space with harmful behaviour such as bullying, flaming and trolling. I will closely monitor the wiki for such behaviour. As the wiki’s administrator I will have access to participants’ real identities. If any participant behaves inappropriately, I will use this access to inform the participating institution’s member of staff of the participant’s identity and negotiate appropriate action (for example, issuing a warning and, if necessary, removing any offenders from the study).

A member of staff from the institution will be present or in audible range.

The writing of the wiki will be video recorded. It is possible that individual students, or the whole group, will become uncomfortable with this, at which point I will cease recording.

Stage 3: Individual Interviews

Because these are individual interviews, it is possible participants will be uncomfortable in a one-to-one setting with a relative stranger.

I am an experienced teacher. I will use any opportunity to reassure the students and develop a working relationship prior to the individual interviews.

The interviews will be held in an open space or a room with an open door. A member of staff will be present or in earshot.

  20. How will you maintain participant anonymity and confidentiality in collecting, analysing and writing up your data?

The institutions will be given pseudonyms. The participants will be asked to create their own usernames, which will be vetted for appropriateness by the member of staff representing the institution.

The participants will be referred to throughout by their usernames. If a username could be interpreted in a way that reveals a user’s real identity, I will provide an alternative.
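The mapping from real identity to vetted username would be held only by me, as administrator, and applied before any material enters analysis; a minimal sketch in Python, where all names are hypothetical examples:

```python
# Hypothetical mapping from real identity to vetted username; only the
# researcher, as wiki administrator, holds this mapping, and it is stored
# separately from the research data.
IDENTITY_TO_USERNAME = {
    "Jane Doe": "bluefox",
    "John Smith": "quartz7",
}

def anonymise(text: str) -> str:
    """Replace any real names in transcripts or field notes with the
    participant's vetted username before analysis and write-up."""
    for real_name, username in IDENTITY_TO_USERNAME.items():
        text = text.replace(real_name, username)
    return text
```

A final manual check of transcripts would still be needed, since simple substitution cannot catch misspelt names or indirect identifiers.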

The search logs on the proxy server will record only the searches performed by each machine, not by named users. I will only be able to identify who searched for what, and when, by referring to the video.
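Each proxy log entry would therefore pair a timestamp and a machine identifier with the query text; a minimal sketch in Python, where the tab-separated line format and field names are assumptions rather than a fixed specification:

```python
from datetime import datetime
from typing import NamedTuple

class SearchEvent(NamedTuple):
    timestamp: datetime  # when the search was made, for matching to the video
    machine: str         # workstation identifier, not a user identity
    query: str           # the search terms entered

def parse_log_line(line: str) -> SearchEvent:
    """Parse a hypothetical tab-separated proxy log line such as:
    '2013-05-14T10:32:07\tPC-03\tcannabis mental health'."""
    stamp, machine, query = line.rstrip("\n").split("\t")
    return SearchEvent(datetime.fromisoformat(stamp), machine, query)
```

Because the log carries only machine identifiers, matching a search to a participant requires cross-referencing the timestamp against the video, as described above.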

  21. How will you store your data securely during and after the study?

The digital recordings of the group interviews, the offline discussions during observations, and the debrief interviews will be removed from the recording device and uploaded to a password-protected secure server hosted by the University.

The proxy server will be my laptop. Immediately after each session, the data files will be transferred to a password-protected secure server hosted by the University and then removed from my laptop.
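Before any file is deleted from the laptop, a checksum comparison can confirm that the transfer to the University server completed intact; a minimal sketch using Python’s standard library, with the two-path interface being an illustrative assumption:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks so that large
    audio or log files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_delete(local_copy: Path, uploaded_copy: Path) -> bool:
    """Only remove the local copy once both digests match exactly."""
    return sha256_of(local_copy) == sha256_of(uploaded_copy)
```

This is a precaution against silently truncated uploads; the local copy is removed only after the check passes.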

The videos will be recorded on tape. Immediately after the recordings, the tape’s content will be uploaded to a password-protected secure server hosted by the University of Southampton and then deleted.

The wiki and all its data will be password protected. Only registered users will be able to view or edit its content. The wiki and its data cache will be encrypted and stored securely on a University server.

  22. Describe any plans you have for feeding back the findings of the study to participants.

I will publish the study’s findings on ePrints and distribute the URL to all participants by letter addressed to their institution.

  23. What are the main ethical issues raised by your research and how do you intend to manage these?

The main ethical issues are:

  • I will be working with under-18s.
  • I may discuss potentially ethically sensitive topics.
  • I will be using primary data, i.e. search logs, audio and video recordings.
  • The wiki and its discussion pages may be abused.

Strategies to manage these risks are described above.

  24. Please outline any other information you feel may be relevant to this submission.

I am a former secondary school teacher and am CRB checked. My training and experience will help me identify and manage many of the risks identified above.

The use of search logs and video is unprecedented in this field of research and is therefore important to the overall thesis. For the searches I need a record of what the students searched and when; one I can use to discuss their choices during the interviews. During deliberations that influence the wiki’s content, I need to see who talked to whom and when. The video will be an objective record of how knowledge is socially constructed which I can refer to when interviewing the students.