Electronic copy available at:
Criteria for Normative Technology
An essay on the acceptability of ‘code as law’
in light of democratic and constitutional values
Bert-Jaap Koops
Tilburg University, TILT
TILT Law & Technology Working Paper No. 005/2007
Version 0.4
Tilburg University Legal Studies Working Paper No. 007/2007
7 December 2007
This paper can be downloaded without charge from the
Social Science Research Network Electronic Paper Collection
An overview of the TILT Law & Technology Working Paper Series can be found at:
Criteria for Normative Technology
An essay on the acceptability of ‘code as law’ in light of democratic and
constitutional values
Bert-Jaap Koops
Technology influences people’s behaviour in significant ways. Increasingly, norms are intentionally built into technology to steer behaviour, for example, by making it impossible to copy a DVD or by filtering websites with undesirable material. These norms are generally not developed and designed according to the accepted procedures for law-making, but are made by a variety of private and public actors and translated into ‘code’. Since normative technology can influence people’s behaviour as significantly as laws do, this development raises questions about its acceptability. To enable an assessment of normative technology in light of democratic and constitutional values, this essay proposes a systematic set of criteria that can be applied to ‘code as law’.
1. Introduction
Technology has always had a certain normative element – it is never neutral. Over the past decade or two, however, something has been changing. With the advent of information and communication technologies (ICT) and the Internet, technology is increasingly being used intentionally as an instrument to regulate human behaviour. Notable examples are Digital Rights Management (DRM) systems (enforcing – or extending – copyright), filtering systems (which block ‘harmful’ content), Privacy Enhancing Technologies (PETs, which allow citizens the control over personal data that they are losing in the digital age), and terminator technologies (which prevent genetically modified crops from propagating, forcing farmers to buy new seeds each year). Sometimes, as with PETs, norms are incorporated in technology to enforce existing legal rules; in other cases, they are built in to supplement or extend legal rights, thus setting new norms.
This development means that the normative role of technology is becoming more important, to such an extent that we may speak of a qualitative difference: technology increasingly enforces or supplements law as an important regulatory instrument. That many such efforts are not very effective so far – DRM and filtering systems are often easily circumvented, and PETs have in the past decade not moved beyond the stage of ‘promising concept’ – does not diminish the potential import of this finding: however tentatively, technology is increasingly used intentionally to enforce or establish norms. Technology that sets new norms clearly raises questions about the acceptability of those norms, but even if technology is used ‘only’ to enforce existing legal norms, its acceptability can be questioned, since the reduction of ‘ought’ or ‘ought not’ to ‘can’ or ‘cannot’ threatens the flexibility and human interpretation of norms that are fundamental elements of law in practice.
The topos of ‘code as law’ has been put on the agenda by scholars like Joel Reidenberg and Lawrence Lessig. It basically refers to the notion that technology is increasingly being used intentionally in a normative way, thus influencing people’s behaviour to an ever larger extent. Whereas ‘code as law’ is an often-used phrase to denote this phenomenon or topos, I shall use the term ‘normative technology’ to indicate this type of technology itself, i.e., technology with intentionally built-in mechanisms to influence people’s behaviour. Normative technology is part of the fourth category of Lessig’s (1999b: 506ff) four modalities of regulation (laws, norms, markets, and architecture):
1 Prof.dr. Bert-Jaap Koops is Professor of Regulation & Technology at TILT – the Tilburg Institute for Law, Technology, and Society,
Tilburg University, the Netherlands.
This paper was written as part of a project on law, technology, and shifting balances of power, funded by the Dutch Organisation
for Scientific Research. I thank the colleagues at TILT, in particular Eva Asscher and Maurice Schellekens, Roger Brownsword,
Bärbel Dorbeck-Jung, Bert van Roermund, Karen Yeung, and the Tilburg Legal Research Master students for their comments on
earlier versions.
‘Code’ here means: computer code. The topos is also sometimes phrased as ‘Code as code’, indicating that West-Coast code
(Silicon Valley software) functions as East-Coast code (Washington, D.C., codified law) (Lessig 1999a: 53).
Lessig (1999a; 1999b) often uses the term ‘code’ (without inverted commas) for the technology itself, but this does not allow
distinguishing between technology (or computer code) at large and technology (or computer code) with intentionally built-in rules.
The latter significantly differs from the former. Technology in itself of course has a regulatory effect on people’s behaviour (it limits
and extends people’s behavioural scope). The novelty of ‘code as law’ is that technology is used intentionally as an instrument to
influence the way people behave, supplementing law as a regulatory instrument. In order to distinguish this particular type of
technology from technology at large, I have not adopted Lessig’s term ‘code’ but instead use ‘normative technology’, the adjective
showing that behaviour-influencing norms are part of the technology itself.
‘Code sets these features [conditions on one’s access to areas of cyberspace]; they are features selected by code writers; they constrain some behavior (for example, electronic eavesdropping) by making other behavior possible (encryption). They embed certain values, or they make the realization of certain values impossible.’
‘Code as law’ is viewed sometimes from an optimistic and sometimes from a pessimistic point of view.
Joel Reidenberg (1993; 1998) was one of the first scholars to point out that in the digital age, software
and hardware tend to regulate themselves, or rather, Internet users and developers tend to regulate
themselves through technology. He coined the term Lex Informatica to refer to this development, thus
comparing the newly emerging technology-embedded ‘law’ with the largely bottom-up-developed Lex
Mercatoria of the Middle Ages. Initially, Reidenberg viewed this development positively: since traditional legislatures are – for many reasons – not fit to regulate the Internet by law, he invited public authorities to embrace the emerging Lex Informatica to fill the gap of Internet regulation (Reidenberg 1998). Later, he turned more pessimistic, noticing the downside of self-regulatory norms being built into technology in ways that bypass democratic control (cf. Reidenberg 2007).
Lawrence Lessig has been most influential in deepening and disseminating the understanding of
the role that normative technology nowadays plays. In Code and Other Laws of Cyberspace, Lessig
argues, for example in relation to privacy, that ‘the code [technology] has already upset a traditional
balance. It has already changed the control that individuals have over facts about their private lives’
(1999a: 142). Privacy is thus threatened through normative technology, beyond the grasp of
legislatures that try to establish a desirable level of privacy through democratically legitimated laws
(cf. Koops & Leenes 2005). At the same time, Lessig calls for a response to privacy-threatening
technology in the form of PETs. In other words, in Lessig’s view, ‘code’ that disturbs the traditional
balance between privacy and other interests should be checked by ‘code’ that incorporates privacy
values (Lessig 1999a). Whether that is feasible remains to be seen: in practice, PETs are rarely used
on a wide scale. In fact, technology is gradually eroding privacy, and built-in privacy threats play a role
in this process (Koops & Leenes 2005). And privacy is not the only area in which normative
technology threatens democratically and constitutionally established balances between conflicting
interests. Also in the fields of freedom of expression (Lambers 2006) and intellectual property
(Helberger 2006; Reidenberg 2007), ‘code’ is functioning more and more as ‘law’ by intentionally
regulating human behaviour. In short,
‘[c]ode is increasingly being sought as a regulatory mechanism in conjunction with or as an alternative to
law for addressing societal concerns such as crime, privacy, intellectual property protection, and the
revitalization of democratic discourse’ (Kesan & Shah 2004: 279).
This marks a significant difference from traditional technology, which of course also has a regulatory effect on people’s behaviour. The novelty of ‘code as law’ is that technology is nowadays being used intentionally as an instrument to influence the way people behave, supplementing law as a regulatory instrument. A key difference between ‘code’ and ‘law’ is that normative technology, both in its norm-enforcing and in its norm-establishing form, influences how people can behave, while law influences how people should behave. This is why the rise of intentionally normative technology, in contrast to traditional technology, raises the democratic and constitutional issues that are central to this paper.
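The distinction between how people can behave and how people should behave can be made concrete in a deliberately simplified, hypothetical sketch (the functions and the copying scenario are invented for illustration and are not drawn from the literature discussed here): a legal norm leaves the forbidden act possible, whereas a norm built into code removes the option altogether.

```python
# Hypothetical illustration: a legal prohibition vs. the same norm built into code.

def copy_file_under_law(content: str, is_copyrighted: bool) -> str:
    """Law says one *should not* copy; the act itself remains possible."""
    if is_copyrighted:
        print("Warning: copying this work may infringe copyright.")
    return content  # the user retains the choice to break the norm

def copy_file_under_code(content: str, is_copyrighted: bool) -> str:
    """Code says one *cannot* copy; disobedience is no longer an option."""
    if is_copyrighted:
        raise PermissionError("Copying blocked by built-in rule.")
    return content

copy_file_under_law("song data", is_copyrighted=True)  # succeeds: the norm is violable
try:
    copy_file_under_code("song data", is_copyrighted=True)
except PermissionError:
    pass  # the embedded rule enforces itself absolutely
```

The sketch also shows why translation matters: the coded rule admits no exception, interpretation, or civil disobedience, unlike the legal rule it mirrors.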
2. Research question
While the topos of ‘code as law’ has been researched from different perspectives since the seminal
work of Reidenberg and Lessig, no clear conclusions have emerged so far in academic research on
the acceptability of this development. Scholars regularly offer opinions on this, yet a framework for
assessing ‘code as law’ seems to be lacking. How should rules that are built into technology be assessed, given that technology has special characteristics when it enforces or establishes legal norms? It is safe to say that a norm that is built into technology to regulate people’s behaviour, such as an anti-copying measure on a CD that is used to enforce legal rights as laid down in copyright law, is more than just a ‘feature’ of technology: it sets its own standard by absolutely enforcing existing legal rights or by establishing new rights. This raises the question of how the criteria – both procedural and substantive – that are traditionally applied to laws apply to norms that are embedded in technology. In
particular, there are concerns that fundamental safeguards of democratic and constitutional values
may not apply fully or perhaps at all to regulation by technology, while the impact on citizens’ behaviour can be as significant as the impact of legal norms enforced by legal procedures. This
leads to my main research question: Which criteria should be used to assess the acceptability, in light
of democratic and constitutional values, of normative technology?
This question is part of the larger question of how normative technology can or should be assessed.
Since such a large question cannot be addressed in a single essay, I restrict myself to the first step in
assessing ‘code as law’: presenting a set of criteria that can be used as a checklist for assessing the
acceptability of normative technology. A systematic presentation of criteria for normative technology,
which has so far not been undertaken in the literature, can bring the academic debate on ‘code as law’
a step forward, by providing a tool for authors who write about the acceptability of specific cases of
normative technology.
I will start with some thoughts on why normative technology merits assessment and why criteria
relating to democratic and constitutional values are relevant for such an assessment (section 3). I will
use a heuristic method to assemble a set of criteria (section 4) which I will then elaborate and
structure (section 5). I end with a conclusion on how the set of criteria could be further refined and tested
(section 6), together with an agenda for further research (section 7).
I should stress that this essay has a hybrid character. The topic and research question are
fundamental and complex, while their treatment is rather casual. I have chosen to present my thoughts
in the form of an essay, as this genre allows addressing a difficult issue in article-length form without
too many ifs and buts. The essay is an attempt to put thoughts on paper in order to invite debate,
reflection, underpinning, elaboration, and testing, which I hope subsequent scholars will embark upon.
3. Why should normative technology be assessed?
A starting point of this essay is that normative technology regulates behaviour in unprecedented ways,
and that, as it is used intentionally as an instrument to influence human behaviour, it should comply
with criteria that society considers important for regulation. As Lessig (1999a: 224) argues: ‘If code is a
lawmaker, then it should embrace the values of a particular kind of lawmaking.’
This holds, first, for technology that is used as an enforcement instrument by traditional, legitimate,
law-makers. It should be noted that the acceptability of norm-enforcing normative technology is not
automatically guaranteed by the norm having been promulgated by a legitimate public law-making
body. The way in which a legal norm is translated and inscribed in technology is a separate activity
that should be assessed in its own right, because ‘law in the books’ is not and cannot be exactly the
same as ‘law in technology’ (cf. Hildebrandt & Koops 2007: 22-28). In the translation process, choices
and reductions take place, and these choices are not necessarily made by public authorities subject to
democratic checks and balances, but by technology developers who are at best subject to EDP
auditors. The rule as embedded in technology can hardly ever be the same as the rule established by
the legislature. Moving beyond norm-enforcing technology, democratic and constitutional criteria are a
fortiori relevant for norm-establishing technology by public bodies, because the regular checks and
balances of law-making risk being circumvented by this application of normative technology.
Second, democratic and constitutional values are arguably also relevant for technology where
private parties build in norms to influence users’ behaviour. Not only can ‘code’ be the long hand of the
law, but also the invisible hand of the market or of society.
In the latter case, particularly when there is
a large impact on the lives of people (e.g., an addictive component being added to tobacco, or
‘terminator technology’ used in genetically-modified organisms to prevent multiplication), it is valid to
investigate the acceptability of rules embedded in technology by non-state actors:
‘Some of the examples in this book show the potential of future code regulation to circumvent
constitutional safeguards. (…) Code as self-regulation is a very strong regulator. (…) ‘Code’ as a product
of self-regulation should, for that reason, be subjected to some of the criteria that were used to judge the
validity and legitimacy of legal systems.’ (Asscher & Dommering 2006: 249-251)
Traditionally, the acceptability of ‘private’ regulation can be and has been assessed separately from the acceptability of ‘public’ regulation, but a sharp distinction between public and private regulation can no
longer be made as we are moving towards a world of polycentric governance. Polycentric governance
is the notion that regulation is effected from various, partly overlapping, circles of power. It combines
the vertical concept of multi-level governance (multiple, interacting layers of public regulation at local,
national, supranational, and international levels) with the horizontal concept of regulation by non-state
actors, creating a paradigm in which regulation is both vertically and horizontally a complex and
interactive process. From this perspective, normative technology as developed and applied by private
parties is an inextricable part of the regulatory arena, and as such merits being assessed by generic criteria for regulation. As Reidenberg (1998: 583) notes: ‘[t]he technical community, willingly or not, now has become a policy community, and with policy influence comes public responsibility.’
It is important to realise that some authors, in their work on ‘code as law’, focus on only one of these two types of ‘code’. Brownsword (2004; 2005), for example, is largely concerned with normative technology that is used as a compliance tool to mandatorily enforce legal norms. This establishes a very different context and perspective from authors, like Lambers (2006), Helberger (2006), and Reidenberg (2007), who focus on normative technology as created in the private sector.
Nevertheless, the acceptability of ‘publicly embedded’ rules in technology does differ, to a certain
extent, from that of ‘privately embedded’ rules. How big the difference actually is, will often depend on
the context. Sometimes, normative technology developed by private parties has a distinct ‘public
element’, e.g., DRM systems having received a stamp of approval by law-makers that outlawed the
circumvention of DRM, or search engines filtering results to hide material that the Chinese government
does not like its citizens to see. It can therefore be useful to assess the acceptability of ‘private’
normative technology with the same set of criteria as that used for ‘public’ normative technology, while
keeping in mind that, depending on the context, certain criteria from the set will be less relevant or
should be interpreted differently.
This means that it is useful to develop a single set of criteria for both norm-enforcing and norm-establishing, public as well as private, normative technology; within this single set, however, the interpretation of criteria and the relative weight accorded to them will be context-dependent.
4. Finding criteria for the acceptability of normative technology
Since my starting point in this essay is that normative technology, as it influences human behaviour in
unprecedented ways, should comply with criteria that society considers important for regulation, a
good place to start looking for criteria for acceptability of normative technology is to study criteria for
law. This is, of course, opening Pandora’s box, for libraries are filled with books on the question ‘What
is good law?’, but it is hard to find in any of these libraries the definitive volume that sums it all up
nicely. Not only are there opposing views on the very core of law, e.g., naturalist versus proceduralist
views of what constitutes law, but also, ‘good law’ is a fluid concept very much in debate under the
influence of current societal developments.
Rather than trying to find my way in these librarinths and derive acceptability criteria in a top-down
manner from a theory-based interpretation of law, I will approach the matter in a pragmatic, bottom-up
way, namely by looking at the criteria that are applied in practice by scholars writing about technology
regulation. I shall start with the few authors who have provided criteria for normative technology in
particular. Since this is only a small group, I complement them with authors who offer criteria, more or
less explicitly, for assessing technology regulation in today’s world of polycentric governance.
Although the latter do not address ‘code as law’ in particular, inspiration can be drawn from the
criteria they apply, because they touch upon topical areas where governance notions are challenged
by technology developments; using their work may therefore provide additional insight into relevant
elements of today’s range of views on the importance of democratic and constitutional values for
technology regulation. I should stress that the selection of this latter group of authors is subjective, with
more than the usual amount of subjectivity that is inevitable in legal scholarship: I chose these rather
than others largely because I have recently been working with these texts. This need not be a
methodological flaw: it is based on the assumption that the acceptability of normative technology is
closely related to the acceptability of technology regulation in today’s world of polycentric governance,
and on the assumption that in any sufficiently large subset of authors on technology regulation, more
or less the same criteria will show up. Those who do not share these assumptions, I would invite to
use a different or larger collection of authors, or to use a top-down approach from legal theory, to see
to what extent this leads to substantially different results.
My methodology for finding criteria for the acceptability of normative technology thus is a heuristic
process. It does not lay a claim to procedural justice, in which the criteria would be valid because the
right procedure was followed to find them, but rather to outcome justice, in which the criteria are valid
because the outcome is accepted by the reader as a reasonable one. The proof of the pudding will be
in the eating. This implies that the resulting set of criteria is to be tested by legal scholars, who should
try and point out errors or gaps in the criteria set, so that in a re-iterative process a firmer and better
collection of criteria can be built.
4.1. Authors on normative technology
Lawrence Lessig does not provide an extensive framework for assessing normative technology in his
seminal book. His starting point is by and large the body of constitutional values, both substantive and
structural or procedural, with a priority for the latter: ‘structure builds substance’ (Lessig 1999a: 7-8).
Although he does not systematically specify this body of values, at certain points Lessig highlights
what he considers key values, notably liberty (p. 5), transparency (p. 224-225), and having a choice
(p. 237-239), neatly summed up in the closing lines: ‘liberty is constructed by structures that preserve
a space for individual choice, however that choice may be constrained’ (p. 239).
Joel Reidenberg likewise has not systematically articulated criteria for acceptability of the
emerging Lex Informatica, but several criteria can be inferred from his work. In his 1998 paper, he
tends to stress democratic supervision (‘Because technical designs and choices are made by
technologists, government policymakers should play an important role as public policy advocates
promoting policy objectives’; Reidenberg 1998: 580), and a need for optimally flexible rules, which can
be limited when fundamental values are at stake: ‘flexibility is only undesirable when fundamental
public interests are at stake and the public interest requires rules that individual participants in the
network might not choose themselves’ (1998: 580, 584). Focusing on normative technology as an
enforcement tool by governments, Reidenberg (2004: 229) applies two criteria: legal authority (‘[a]s a
threshold matter, states must have a legal process in place to authorize the use and choice of
technological enforcement tools’) and proportionality (‘the basic principle (…) should be that a state
only use the least intrusive means to accomplish the rule enforcement’). Normative technology that is
developed by private parties – technologists fighting against intellectual-property rights – threatens to
bypass the most fundamental principles: the rule of law and democracy (Reidenberg 2007: 19). Thus
we see Reidenberg applying, for diverse forms of normative technology, the criteria of rule of law,
legal authority, democracy, fundamental values, flexibility, and proportionality.
Kesan and Shah’s work is mostly concerned with describing rather than evaluating the processes
that shape normative technology, although they stress the importance of ‘societal concerns’ and social
values (Kesan & Shah 2004). In a more normative article, however, they have highlighted three
characteristics of code that regulators should pay attention to: transparency of rules, open standards
(which can be seen as transparency of the rule-making process), and default settings. The latter are
important, because users tend not to change default settings, partly because they have a legitimating
effect: apparently, the default is ‘normal’. This means that default settings ought to be made optimal
for users, in light of the values that are at stake for them (Shah & Kesan 2003: 5-8).
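Shah and Kesan’s point about defaults can be illustrated with a small hypothetical sketch (the settings class and its field are invented for illustration): whichever value the developer chooses as the default effectively becomes the norm for the majority of users, who never change it.

```python
# Hypothetical illustration: the developer's default choice sets a de facto norm.

from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # The value chosen here is a normative decision by the developer:
    # most users never change it, and its presence legitimates it as 'normal'.
    share_usage_data: bool = True  # opt-out default: data flows unless refused
    # share_usage_data: bool = False  # opt-in default: data flows only on consent

settings = PrivacySettings()  # the typical user accepts whatever the default is
print(settings.share_usage_data)  # prints: True
```

Flipping the single default line reverses the norm for that silent majority, which is why Shah and Kesan argue defaults should be set optimally for users rather than for the provider.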
The main systematic attempt to date to offer a set of criteria by which to judge normative technology has been made by Lodewijk Asscher in the Institute for Information Law’s ‘code as law’ project (Dommering & Asscher 2006). He presents a fairly rough and tentative set of criteria, in the form of questions, using ‘code’ to indicate ‘normative technology’ (Asscher 2006: 85):
1. Can code rules be understood? If so, are they transparent and accessible to the general public?
2. Can the rules be trusted? Are they reliable in the sense of predictability?
3. Is there an authority that makes the code rules?
4. Is there a choice?
This set can be summarised as: transparency, reliability, accountability, and choice. All of these are
procedural criteria.
Brownsword, in his discussion of ‘techno-regulation’ (his term for normative technology that
secures compliance, i.e., norm-enforcing technology), basically presents two criteria for regulatory
intervention: effectiveness and legitimacy; legitimacy comes down to respect for human rights and
human dignity (2004: 210). Brownsword seems to regard human rights and human dignity as one,
integrated criterion, interpreting human rights as the major manifestation of human dignity. An essential consequence of his view on human rights is that human beings should have a choice: the
autonomy that underpins human rights ‘implies the provision of a context offering more rather than
fewer options’ (p. 218). For human dignity, it is important not only that right choices are made (to
comply with the rules) but also that wrong choices can be made, and that not all ‘bad’ things are
simply made impossible, for human life is enriched by suffering. As Fukuyama (2002: 172-3) argues:
‘what we consider to be the highest and most admirable human qualities (…) are often related to the way
that we react to, confront, overcome, and frequently succumb to pain, suffering, and death. In the absence
of these human evils there would be no sympathy, compassion, courage, heroism, solidarity, or strength of
character. A person who has not confronted suffering or death has no depth.’
As a result, Brownsword’s key criterion for assessing compliance-proof normative technology is the
existence of choice (2004: 230-232). Implicitly, he also suggests that some kind of flawedness or
fallibility needs to exist: techno-regulation should not be allowed to create a perfect world without
suffering, since humanity is thereby reduced to flatness. A trade-off can be observed here between
effectiveness and choice: the more effective a techno-rule is, the less choice people have to disobey
it, and vice versa.
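Brownsword’s trade-off between effectiveness and choice can be sketched in a hypothetical example (the function and its modes are invented for illustration): the same techno-rule can be implemented as a warning that preserves the choice to disobey, or as a block that enforces compliance absolutely.

```python
# Hypothetical illustration of the effectiveness/choice trade-off in techno-rules.

def apply_techno_rule(action_is_forbidden: bool, mode: str) -> bool:
    """Return True if the action goes ahead.

    'warn'  : less effective, but preserves the choice to disobey;
    'block' : fully effective, but removes the choice entirely.
    """
    if not action_is_forbidden:
        return True
    if mode == "warn":
        print("This action violates a rule; proceeding is your own choice.")
        return True  # the rule can still be broken
    if mode == "block":
        return False  # perfect enforcement: no room for disobedience
    raise ValueError(f"unknown mode: {mode}")
```

In Brownsword’s terms, only the ‘warn’ variant leaves room for the fallibility he considers essential; the ‘block’ variant trades that away for perfect compliance.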
I leave aside the first question, which is a preliminary question (‘Can rules be distinguished in the code’) to assess whether the
technology at issue is normative.
7 At least, of human dignity as empowerment, which Brownsword opposes to human dignity as constraint. Brownsword himself
favours human dignity as empowerment (see p. 232) and hence emphasises the importance of human rights as the ultimate
touchstone (p. 234).
In subsequent work, Brownsword has outlined as (additional) criteria the principles of good
governance: transparency and accountability (2005: section III). Particularly interesting is the remark
that, even if techno-regulation is implemented in a fully transparent and accountable way, transparency is lost in due time, because the rule built into the techno-object simply becomes, for later generations, part of the object’s features and is no longer recognised for what it once was: a normative rule that purposefully influences people’s behaviour. Then, ‘it is only outsiders and historians who can trace the
invisible hand of regulation’ (Brownsword 2005: section III(i)). Overall (with considerable interpretation
on my account), Brownsword’s criteria come down to effectiveness, respect for human rights, choice,
fallibility, transparency, and accountability.
In passing, Brownsword (2004: 223-224) also mentions a set of principles for good regulation proposed by Trebilcock and Iacobucci in the form of oppositional pairs: independence versus accountability; expertise versus detachment; transparency versus confidentiality; efficiency versus due process; and predictability versus flexibility. This is also a productive set of criteria to use for technology regulation, since it emphasises the interrelatedness of requirements and the often inherent tension existing between criteria.
4.2. Authors on technology regulation
Bärbel Dorbeck-Jung has assessed the regulation of nanotechnologies against requirements for good governance. In her paper (Dorbeck-Jung 2007: section 2.3), she mentions as basic principles for
legitimacy in the democratic constitutional state: legality (rule of law), constitutional rights, democratic
decision-making and control, checks and balances of power, and judicial review, which should
guarantee the underlying basic values of freedom, equality, legal certainty, democracy, and
effectiveness. Moreover, in the current context of multi-actor and multi-level governance, additional
measures are required to ensure legitimacy: stronger participation of citizens in the regulation process,
increased transparency of regulation, and increased accountability and control of the impartiality and
objectivity of regulators. With some reshuffling and combining, Dorbeck-Jung’s good-governance
criteria may be summarised as: human rights (including freedom, equality); constitutional principles
(including legality, legal certainty, checks and balances); democratic decision-making (including
participation of citizens); accountability (including judicial review and control); and effectiveness.
Besides these specific criteria, Dorbeck-Jung (2007: section 2.3) also refers to Scharpf’s distinction between input and output legitimacy. Input legitimacy implies legitimacy through the rules of the game and the procedure followed; output legitimacy means that the result establishes legitimacy.
Although in this section, I aim at output rather than input legitimacy, in the context of normative
technology, input legitimacy is a primary concern. Because technology is often irreversible – once it is
developed and applied in society, it is hard to fundamentally remove it from society in those
applications – the process of developing technology is a key focus when normativity is at stake. After
all, it may well be too late when technology simply appears in society to ask whether it is acceptable to
use this technology; quite often, the genie may then be out of the bottle never to be put back in. It is
still relevant to look at output legitimacy, since the way in which normative technology functions in
society is in itself a useful yardstick, but the difficulty of reversing technology does imply that criteria
addressing the process of technology development – ‘rules of the game’ – should be a key part of our
acceptability criteria.
In the context of ICT regulation, together with TILT colleagues, I have developed a set of criteria
for assessing self-regulation (Koops et al. 2006). Self-regulation differs from normative technology, but
it is fairly similar in the way both differ from government regulation. This set may therefore be of use here as well:
1. fairness (respect for fundamental rights and social interests);
2. inclusiveness (participation of relevant actors);
3. compliance;
4. transparency (both of the rule-making process and of the rules);
5. legal certainty;
6. context;
7. efficiency.
Some criteria for regulation in general are given by Christopher Hood, as described by Raab & De
Hert (2007): a tool of government must be selected after examining alternative possible tools, it must
be matched to the job, not be ‘barbaric’, and be effective with the minimum possible costs. These
criteria can be summarised as checking alternatives, effectiveness, moral values, and efficiency.
Another look at legitimacy, in this case of non-traditional regulatory actors, is offered by Anton Vedder's
analysis of Non-Governmental Organisations. He outlines three dimensions of legitimacy (Vedder
2007: 11-12, 38):
1. a moral dimension: legitimacy in terms of conformity to moral norms and values; these include
substantive values (e.g., related to core human rights) but also procedural moral values like
accountability, responsibility, and transparency;
2. a legal or regulatory dimension: legitimacy in terms of conformity to rules; this is related to the
notion of legality or (as I interpret it) the rule of law;
3. a social dimension: legitimacy in terms of consent or representation of those involved or affected;
this is related to representativeness.
Vedder posits a hierarchy by considering the moral dimension as primary and the legal and social
dimensions as secondary. Legal and social criteria should follow from or help to fulfil moral criteria.
This essentially means that the values and norms embedded in the regulatory subject ‘should be
acceptable in principle for all, and that those values and norms are integrated into the NGO’s
organizational structures and activities as fully as possible' (p. 38). This could apply, mutatis mutandis,
to the values and norms built into normative technology.
5. A systematic set of criteria for acceptability of normative technology
Putting all of the above in a melting pot, we can concoct several sets of criteria, depending on the
desired level of abstraction (Table 1). The most abstract set of criteria is: substantive criteria,
procedural criteria, and criteria of result, but this is not very helpful for concretely assessing normative
technology. At a lower level of abstraction, we can split substantive criteria into human rights and
(other) moral values,
procedural criteria into principles like the rule of law and democracy,
transparency, and accountability, and principles of result into being flexible and allowing for choice. All
of these can be further detailed into enumerations of lower-level criteria. This can be summarised in
the following table, in which I have tried to include as many of the criteria listed in the previous section
as possible.
The levels of abstraction run from level 1 criteria, via level 2 and level 3 criteria, to level 4 criteria (the last being far from exhaustive):
1. substantive criteria
o core substantive principles
- human rights: equality and non-discrimination; freedom of expression; privacy
- other moral values: autonomy; human dignity; success and fallibility
2. procedural criteria
o core procedural principles
- rule of law: due process; legal certainty; checks and balances
- democracy: democratic decision-making; all-stakeholder participation
o other procedural principles
- transparency of rule-making
- checking alternatives: justifying choices
- accountability: review
- expertise and independence
- efficiency: proportionality
3. result criteria
o principles of result
- effectiveness
- choice: possibility of choice; optimal default settings
- flexibility: context-adaptability
- transparency of rules
Table 1. Overview of criteria
When discussing criteria for assessing normative technology, it is useful to bear in mind the level of
abstraction at which the criteria are formulated. Level 1 through level 4 moves from abstract to
concrete; this should not be confused with importance: some level 4 criteria, like autonomy and
equality, are very fundamental, but they are simply more concrete than the abstract notion of 'core
substantive principles'. Two caveats apply to the table. First, I see moral values as a broad category
which encompasses human rights; the row of 'other moral values' can therefore be seen as including
all those substantive moral values that are not established as specific human rights. Second, there is
substantial, unavoidable overlap between various categories and criteria in this table, which for the
sake of clarity I have simplified away.
I consider level 3 in this table the most apt level of abstraction for discussing the acceptability of
normative technology, at least for the purposes of this essay, which looks at normative technology at
large. When assessments are made of concrete instances of normative technology, a finer-grained set
of criteria at level 4 will be more suited.
Within the set of criteria, it may be useful to posit a hierarchy, in that the upper-half criteria of core
substantive and core procedural principles are more important than the lower-half criteria of other
principles of procedure and result criteria. Such a hierarchy implies that the primary criteria (above the
line) should be met before the secondary criteria come into view (cf. Rawls 1972: 302-303). However,
there is always some (or much) tension between criteria, and with these still rather abstract and broad
criteria, assessments cannot yield yes-or-no answers, but rather more-or-less answers. In certain
cases, lesser scores on a primary criterion may well be considered to be outweighed by higher scores
on secondary criteria. But the hierarchical order does provide a bottom line: if core principles are met
only to a low extent, then the overall assessment must be negative.
In summary, the acceptability of normative technology can be assessed by using the following,
hierarchically ordered, set of criteria:
1. primary criteria
o human rights
o other moral values
o rule of law
o democracy
2. secondary criteria
o transparency of rule-making
o checking alternatives
o accountability
o expertise and independence
o efficiency
o choice
o effectiveness
o flexibility
o transparency of rules
How could this set of criteria be applied? Some authors, such as Brownsword, try to judge the
acceptability of normative technology as such, i.e., on the general level of 'code as law'. Not only is it
extremely complex to answer this question on a general, abstract level, but it also seems not to do
justice to the great variety in normative technology that is already visible today. Some instances, such
as road bumps and safety belts, are widely accepted, while other forms, including DRM and Internet
filtering systems, are considerably contested. Moreover, the acceptability partly depends on the
context and scale, so that it is hard to say whether or not a type of normative technology, such as
filtering systems, is acceptable if one does not know the scope of its filtering, whether it takes place in
the United States or in Japan, et cetera. Likewise, the interpretation and relative importance of criteria
evolve over time, and with the many stages that technology goes through (from fundamental
research and tentative experimentation through development and testing to application and
marketing), conclusions drawn today about the acceptability of normative technology may differ from
the conclusions drawn ten or twenty years from now.
It is therefore more fruitful, in my opinion, to ask whether or not a concrete instance of normative
technology is acceptable, and to answer this question for fairly specific instances in fairly particular
contexts, for example, a procedure and a computer program to block child pornography on the
Internet, a privacy-friendly identity-management system, or a 'terminator' mechanism in a genetically modified rice crop. It is beyond the scope of this essay to undertake an assessment of such concrete
cases. I restrict myself here to noting that a concrete assessment will never be a straightforward and
uncontested exercise. For one thing, several of the criteria are culture-dependent, in their
interpretation (e.g., moral values, democracy) or in their importance (e.g., human rights, choice). For
another thing, quite diverging models can be used in applying a criterion, for example, when assessing
efficiency or effectiveness. Moreover, for government-instigated normative technology as an
enforcement tool (the long hand of the law), a different interpretation of criteria and weights should be
chosen than for normative technology created in the private sector (the invisible hand of the market).
Altogether, the context-dependent application of the criteria necessitates a context-sensitive
procedure for interpreting and weighing of criteria, which is a challenge to develop. But however
flexible and open to interpretation the criteria may be, this list could provide a heuristic assessment
tool, in the form of a checklist with which scholars and regulators can approach normative
technology: by going through the list and interpreting and weighing the criteria in light of the particular
context.
6. Conclusion
This essay started from the notion that various instances of normative technology, in their intentional
steering of people’s behaviour, are not evidently acceptable. It argued that normative technology, both
the norm-enforcing and the norm-establishing variants, by public as well as by private parties, needs
to be assessed on its acceptability to society, and that democratic and constitutional values play an
important role in this. This is in line with the literature on 'code as law', which on the whole is quite
critical of, and worried about, normative technology. If we take the rise of normative technology as well
as democratic and constitutional values seriously, something should be done.
This essay has attempted to contribute to this, by presenting a checklist of criteria for assessing
instances of normative technology, consisting of substantive, procedural, and result criteria. The
heuristic process to find criteria for normative technology has yielded a rather classic list of criteria,
which at least in its higher levels of abstraction resembles often-used criteria for the acceptability of
law. This is not altogether surprising: in a world of polycentric governance, acceptability of regulation
translates into the most fundamental, overarching notions that we have, like justice and legitimacy,
and such notions can be applied to ‘law in the books’ as well as to ‘“law” in the technology’.
The added value of the set of criteria as presented in this essay, in Table 1, is that it is a
systematic presentation of criteria that can be applied to normative technology, based on a distinction
in levels of abstraction and on a hierarchy of primary criteria (e.g., human rights and democracy) and
secondary criteria (e.g., transparency of rule-making and rules, accountability, and a trade-off between
effectiveness and choice). At levels 1 through 3, the set aims at being comprehensive (readers are
invited to point out gaps or unnecessary overlaps), and the criteria at level 3 of abstraction are
proposed as providing a useful checklist for concrete assessments.
Not all criteria are equally relevant in specific cases, and criteria may have to be interpreted
differently depending on the context of the specific instance of normative technology. Also, as notions
of acceptability co-evolve over time with social, cultural, and institutional settings (Rip 2006), the
criteria will have to be regularly re-assessed themselves. Nevertheless, this set of criteria can be used
by scholars and regulators as a checklist with which to approach normative technology. At the least,
using such a checklist guarantees that all relevant values are taken into account in an assessment,
and it makes more transparent the process of arriving at a conclusion on the acceptability of normative
technology. If authors, when applying the checklist, explain their stance on particular criteria and the
weight accorded to them, the academic and political debate about normative technology may be
raised to a higher level.
7. Agenda for further research
As noted in my description of the heuristic process of arriving at criteria, the proof of the pudding (the
set of criteria) is in the eating: the criteria set is to be tested by legal scholars, who should try to point
out errors or gaps in it, by using different selections of authors who apply criteria in practice, by
deriving criteria from a theory-based interpretation of justice or legitimacy, or simply by testing the set
on a concrete instance of normative technology to check its usability. Thus, in an iterative process, a
firmer and better collection of criteria can be built.
A next step is to systematically assess concrete cases of normative technology, in which
deficiencies in acceptability can be noted. Building on such concrete cases, perhaps more overarching
conclusions about the acceptability of normative technology in general can be drawn if systematic
deficiencies should appear. Recommendations can then be given, not only for concrete cases, but
also for redressing generic acceptability deficits in normative technology. These recommendations
could be addressed to the developers, providers, or controllers of normative technology – often in the
private or semi-private sector – but also to regulators, since they can use regulatory frameworks to
steer normative technology in more acceptable directions.
A particular challenge for research is Ambient Intelligence (AmI), in which smart environments
continuously make instantaneous decisions about citizens and consumers based on profiles and large
collections of personal data. In an AmI environment, legal mechanisms to protect the privacy and
equality of weak parties – citizens, consumers, employees – will be inadequate, and hence, legal
norms should be incorporated into the AmI architecture itself to establish Ambient Law (Hildebrandt &
Koops 2007). The acceptability, in light of democratic and constitutional values, of Ambient Law as
embedded in the AmI infrastructure is an important field of further study and hence of application of
the set of criteria as presented in this essay.
Most of the current literature follows the line of research as outlined so far. Authors consider that
acceptability of normative technology follows from its compliance with acceptability criteria, and they
aim to enhance acceptability by controlling the technology in some way: for example, the rules should
be transparent (open source), the design process should be fair by discussing the rules-to-be-built-in
with all stakeholders, and the technology should leave users a choice of non-compliance.
Yet this line of research – towards controlling normative technology – is not enough. The reverse
implications are much less considered but they are at least as important to study. What does
‘acceptability’ mean in a society that feeds upon normative technology? Notions like good governance,
democracy, and legitimacy are not set in stone, and are much debated today from the perspectives of,
for example, globalisation, multi-level governance, the increasing importance of multinational
enterprises and NGOs, to name a few developments associated with polycentric governance.
Normative technology should be added to the list of developments that trigger a reconsideration of
what it means to live in a democratic constitutional state. To be sure, we should preserve democratic
and constitutional values for lack of a better alternative, but we may well need to change our precise
interpretation of these values in today’s and tomorrow’s society.
If only because 'code' is not equivalent to 'law', the 'rule of law' cannot simply lay down all the
criteria for the 'rule of code'; it will need to adapt, if only to a small extent, to the particulars – positive
as well as negative – of normative technology. Research should be done, therefore, into questions
like: who is the demos that has a say in democratic rule-making in normative technology? How should
transparency be defined if transparent rules in technology, by the mere passing of time, slowly turn
into technological features without a normative flavour? What do autonomy and liberty mean when
technology increasingly restricts making choices, but at the same time opens up new paths to explore
where no-one went before, simply because the technological doors to these paths were formerly
closed? What do fairness and equality mean when we live in surroundings that make the choices for
us, based on individual preferences and profiles? What does ‘law’ mean in a world of Ambient
Intelligence and Ambient Law?
Rather than only look at normative technology from the perspective of safeguarding the
democratic constitutional state, we should thus also look at democratic and constitutional values from
the perspective of normative technology. The interaction between these two perspectives makes the
future of law fascinating, disturbing, daunting, and, more than ever, unpredictable.
References
Asscher, L.F. & E.J. Dommering (2006), 'Code: Further Research', in: Dommering & Asscher (eds.), Coding
Regulation. Essays on the Normative Role of Information Technology, The Hague: T.M.C. Asser Press 2006,
pp. 249-255.
Asscher, L.F. (2006), '"Code" as Law. Using Fuller to Assess Code Rules', in: Dommering & Asscher (eds.),
Coding Regulation. Essays on the Normative Role of Information Technology, The Hague: T.M.C. Asser Press
2006, pp. 61-90.
Brownsword, R. (2004), ‘What the World Needs Now: Techno-Regulation, Human Rights and Human Dignity’, in:
R. Brownsword (ed.), Global Governance and the Quest for Justice, Vol. 4: Human Rights, Oxford: Hart 2004,
pp. 203-234.
Brownsword, R. (2005), ‘Code, Control, and Choice: Why East is East and West is West’, 21 Legal Studies, pp. 1-
Dommering, E.J. & L.F. Asscher (eds.), Coding Regulation. Essays on the Normative Role of Information
Technology, IT&Law Series Vol. 12, The Hague: T.M.C. Asser Press 2006.
Dorbeck-Jung, B.R. (2007), ‘What can Prudent Public Regulators Learn from the United Kingdom Government’s
Nanotechnological Regulatory Activities?’, NanoEthics (forthcoming).
Fukuyama, Francis (2002), Our Posthuman Future. Consequences of the Biotechnology Revolution, London:
Profile Books 2002.
Helberger, N. (2006), ‘Code and (Intellectual) Property’, in: Dommering & Asscher (eds.), Coding Regulation.
Essays on the Normative Role of Information Technology, The Hague: T.M.C. Asser Press 2006, pp. 205-
Hildebrandt, M. & B.J. Koops (eds.) (2007), D7.9: A Vision of Ambient Law, FIDIS Deliverable, October 2007,
available at
Kesan, Jay P. & Rajiv C. Shah (2004), ‘Deconstructing Code’, 6 Yale Journal of Law & Technology, 2003-2004,
pp. 277-389, available at
Koops, Bert-Jaap & Ronald Leenes (2005), ‘“Code” and the Slow Erosion of Privacy’, Michigan
Telecommunications & Technology Law Review 12 (1), pp. 115-188, available at
Koops, Bert-Jaap, Miriam Lips, Corien Prins & Maurice Schellekens (eds.) (2006), Starting Points for ICT
Regulation. Deconstructing Prevalent Policy One-Liners, IT & Law Series Vol. 9, The Hague: T.M.C. Asser
Press 2006, 293 pp.
Lambers, R. (2006), ‘Code and Speech. Speech Control Through Network Architecture’, in: Dommering &
Asscher (eds.), Coding Regulation. Essays on the Normative Role of Information Technology, The Hague:
T.M.C. Asser Press 2006, pp. 91-140.
Lessig, L. (1999a), Code and Other Laws of Cyberspace, New York: Basic Books 1999.
Lessig, L. (1999b), ‘The Law of the Horse: What Cyberlaw Might Teach’, 113 Harv. L. Rev., pp. 501-546.
Raab, Ch. & P. De Hert (2007), ‘The Regulation of Technology: Policy Tools and Policy Actors’, in: Brownsword &
Yeung (eds.), Regulating Technologies, Hart Publishing 2008.
Rawls, John (1972), A Theory of Justice, Oxford: Oxford University Press, 1972.
Reidenberg, Joel R. (1993), ‘Rules of the Road for Global Electronic Highways: Merging the Trade and Technical
Paradigms’, 6 Harv. J.L. & Tech. 287.
Reidenberg, Joel R. (1998), 'Lex Informatica: The Formulation of Information Policy Rules Through Technology',
76 Tex. L. Rev. (3), pp. 553-584.
Reidenberg, Joel R. (2004), ‘States and Internet Enforcement’, 1 U. Ottawa L. & Tech. J., pp. 213-230, available
Reidenberg, Joel R. (2007), ‘The Rule of Intellectual Property Law in the Internet Economy’, 44 Houston Law
Review (4), preprint available at
Rip, A. (2006), ‘A Co-evolutionary Approach to Reflexive Governance – and its Ironies’, in: J.P. Voss, D.
Bauknecht, and R. Kemp (eds), Reflexive Governance for Sustainable Development, Cheltenham: Edward
Elgar 2006.
Shah, Rajiv C. & Jay P. Kesan (2003), ‘Manipulating the Governance Characteristics of Code’, 5 Info (4), pp. 3-9,
available at
Vedder, Anton (2007), ‘Questioning the legitimacy of non-governmental organizations’ and ‘Towards a defensible
conceptualization of the legitimacy of NGOs’, in: Anton Vedder et al., NGO Involvement in International
Governance and Policy: Sources of Legitimacy, Martinus Nijhoff Publishers 2007 (forthcoming), pp. 5-24 and
