New Matilda – The Tangled Web filtering forum

Next week I will be speaking at a forum organised by New Matilda, "The Tangled Web: Beyond an Internet Filter". The abstract reads:

The Federal Government's proposal to block internet sites with a mandatory filter has drawn overwhelming opposition from voices across politics and civil society. So what are the real questions for policy-makers?

These forums explore the ethical, social and political questions raised by government regulation of the internet. With the growing intersection between technology, politics and media, how do existing and proposed classification regimes measure up?

Is filtering inevitable? Or are there better ways to regulate the world wide web?


The forum will be chaired by Peter Black, and will include Senator Scott Ludlam, Irene Graham, and myself.

It's a free event on at 6pm, Tuesday 24 March, at QUT. Places are limited, so please RSVP to enquiries(at)newmatilda.com.

I plan to talk about the rule of law in relation to the proposed filtering schemes. This debate has been running for a long time now, and we're finally starting to move past the reactionary claims that, on the one hand, the Internet should never be censored and, on the other, that censorship and classification can and should work the same way online as offline. The fundamental debate is a good deal more subtle than these polarised extremes.

We ought to know by now that no topics are strictly out-of-bounds for regulation. Roberto Unger showed us that one of the most dangerous things we can do is constrain ourselves by false necessity. The decision of whether and how to censor the internet is a social and political decision, and it doesn't help those opposed to a mandatory filtering regime to obscure that fact.

As a social and political decision, we should certainly be concerned with the efficacy of the proposed plan and the technical difficulties associated with any mandatory filter. This is something that I think has been covered quite well already.

What I would like to see a bit more of is a critical engagement with the goals of the filtering proponents. It is encouraging to see that we are increasingly engaging in this debate. I particularly liked Colin Jacobs' piece, Cyber-libertarians love their children too. Colin makes a special note of Holly Doel-Mackaway from Save the Children, who argues that filtering is taking attention away from the more pressing problems of violence against children (noting that “there are still often waiting lists for children who are victims of sexual abuse to get counselling and that the counselling received is limited”).

It's good that we're having these conversations, and we should continue to engage rather than polarise the debate. There is a lot of vitriol around, and it is not particularly conducive to identifying and implementing socially desirable policy.

One of the things I would like to discuss in this debate is the role of the rule of law in internet censorship. E. P. Thompson demonstrated that law not only reflects the interests of power, but it can also impose a limit on the exercise of power in society. The value of the rule of law is in the way in which the exercise of power is constrained by society. The Labor Government's proposal for mandatory internet filtering suffers from some fundamental flaws when analysed from the perspective of the values of the rule of law.

It seems to be accepted that a filtering scheme requires a secret blacklist (due, in the main part, to the ineffectiveness of any filtering regime in actually blocking access). One of the great strengths of our current censorship and classification regime is that it is transparent (decisions are made in public), and there is appropriate due process (decisions are reviewable). This transparency and procedural fairness is not feasible where the list of banned sites must be kept secret.

This lack of accountability is a fundamental problem. Without public or judicial oversight, abuses of power are much less likely to be detected or corrected. I certainly don't want to disparage the work of our Classification Board [1], but it seems too optimistic to suggest that mistakes will never happen, and we have learnt to be too untrusting to accept that exercises of power will always be free of ulterior motives.

The problem is exacerbated if, in order either to keep up with the volatility of prohibited content on the web or to avoid the difficult and expensive task of classifying websites, we abdicate that responsibility and leave the determination of which websites Australians will be permitted to view to an external organisation (like the Internet Watch Foundation). Such a move places the responsibility for administering a highly contentious and potentially very limiting censorship regime with a body that has no responsibility to the Australian people – a body not bound by the rule of law.

These moves are inherently dangerous. While ultimate responsibility may rest with a government agency, an extra layer of opaque deliberation is likely to make it difficult for the public to accept the legitimacy of the filtering scheme.

This really puts us in a difficult situation. In order to avoid some of the large holes in the proposed scheme, we are obliged to keep the blacklist secret. But any secrecy necessarily involves a reduction in oversight, opens the scheme to allegations of illegitimacy, and makes it more difficult to identify and correct both errors and abuses of power.

There are a number of ways out of this tradeoff, but none of them seem particularly encouraging:

- We could improve the effectiveness of the filter so that it would not matter if the blacklist leaked, but this is unlikely given how easily current and proposed technologies can be circumvented.
- We could accept that the blacklist will be open, and accept that the role of the filter is merely to (a) keep the honest honest, or (b) prevent 'accidental' exposure. Given the lack of evidence that accidental exposure is common, however, the benefits of this approach are dubious.
- We could implement more effective and trustworthy processes to oversee the administration of the blacklist – closed judicial oversight, for example. This may be somewhat workable, but we would still need to ensure the legitimacy of the oversight mechanisms themselves, which adds considerable overhead.
- We could accept the hit to our standards of legitimacy and operate the filter in a closed manner. This tends to be a slippery slope, and it is something I would argue strongly against, but it seems to be the way the government is currently leaning.
- Alternatively, we could conclude that the proposed filter is unworkable, and identify other ways to achieve our policy goals.

I think that by now it's probably obvious that I believe that the proposed filter is not the best way of achieving our goal of protecting children. I think that the potential for abuse is substantial – we have already seen an anti-abortion site added to the blacklist and a discussion forum prohibited from even linking to that content. I also think that it's becoming increasingly clear that the technical deficiencies of the scheme are extremely substantial.

If we're willing to trade away some of the values of the rule of law in order to combat child abuse, we should at least ensure that the trade-off is a good one. Sacrificing accountability and due process for a technically ineffective scheme, however, does not seem to be a desirable outcome.


[1] Interestingly, the Classification Board is not currently in charge of operating the blacklist, which is compiled by ACMA.