‘Back to the Lab Again’: Tackling Emerging Challenges to Human Rights Through Experimentalist Governance Methods
Jack Heron is a solicitor in the Employment, Pensions & Benefits Group at Matheson LLP and a former Bonavero Summer Fellow at REDRESS. He holds an undergraduate degree in Law and Politics from University College Dublin and recently completed the BCL at the University of Oxford.
In this blog, Jack discusses how the adaptability, coordination and learning possibilities presented by experimentalist modes of governance can support the development of nuanced and effective approaches to human rights, as noted by de Búrca and others. He then applies these insights to the emerging human rights challenges of artificial intelligence and climate change.
_______________
A Introduction
Human rights experimentalism, through facilitating adaptability, coordination, and learning, provides an effective framework for addressing emerging challenges to human rights. It requires states to take these challenges seriously, but it also helps and pressures them to do so and to produce suitable solutions.
In this blog, I will explain the nature of human rights experimentalism and its techniques of adaptability, coordination, and learning. I will then discuss the challenges posed by artificial intelligence (AI) and climate change to human rights and show how their resolution can benefit from the use of experimentalist techniques.
B What is Human Rights Experimentalism?
This section will explain the nature and functioning of experimentalism, how it can be applied to the international human rights context, and how its features are helpful in dealing with human rights challenges.
I What is Experimentalism?
Experimentalism is a flexible and dialogue-centred governance philosophy based on ‘framework rule-making and revision through a recursive review of implementation experience in different local contexts.’[1] In a global context, experimentalist governance processes have five steps: initial reflection among stakeholders based on a broadly shared perception of a common problem; the development of a framework understanding with open-ended goals; the autonomous implementation of these goals by lower-level actors, including contextual adaptation where necessary; continuous provision of feedback to the centre from local contexts to allow for monitoring and review; and, finally, the routine re-evaluation and revision of goals and practices in light of peer review and the shared purposes.[2] These processes do not operate purely on moral persuasion, but also contain the possibility of sanctioning non-cooperation by the production of unattractive alternatives (a ‘penalty default’).[3] They are inherently deliberative, use concrete experiences to fashion new possibilities, and allow actors to learn from, discipline, and set goals for one another.[4]
II What is Human Rights Experimentalism?
Gráinne de Búrca applies this framework to the international human rights treaty system in a coherent and instructive manner. Her explanation relies on treaties and treaty bodies as the highest international level, but there is no reason why such a framework could not be administered by another international human rights institution, so long as the requisite feedback and review mechanisms were in place. In de Búrca’s account, treaties rest on a declared consensus among states parties that a particular set of human rights needs to be protected, and they articulate this set of rights in broad terms on which the parties have been able to agree.[5] Treaties facilitate significant autonomy by states and other actors in how to implement and realise these rights in practice, and establish a system of periodic reporting, monitoring, and feedback followed by a non-hierarchical peer review by the treaty body.[6] Iterative learning and re-evaluation are evident in how treaty bodies learn from each other and from actors at various levels (such as states and civil society actors) and develop their practice and interpretation of the treaties accordingly, thus feeding local concerns back into the globally articulated framework understanding of rights.[7] We can therefore speak of human rights experimentalism as a coherent concept.
III Experimentalist Techniques and Human Rights Challenges
This section will outline how the adaptability, coordination, and learning possibilities of human rights experimentalism can help address human rights challenges.
Firstly, the autonomy and feedback features of experimentalism produce much-needed adaptability. If one approach is not working in a particular local context, then another can be tried, rather than a single one-size-fits-none approach being dictated from on high. This is important given the vastly different domestic contexts to which international human rights norms must be adapted if they are to succeed. Experimentalism is thus ‘well-suited to diversity’[8] and can help to deal with a challenge human rights typically face: local resistance to ‘alien’ norms. Lower-level actors can indigenise human rights norms to a greater extent, giving them more of a chance of being adopted and followed, as in de Búrca’s example of Pakistani women’s organisations adapting women’s human rights to an Islamic context to reach rural women.[9]
Secondly, the multi-level nature of human rights experimentalism allows it to benefit from coordination at the international level. This increases the sustainability of a comprehensive approach to the problem and reduces the risk of fragmentation; if the global-level approach is informed by difficulties and opportunities at local level, it can adapt to be sensitive to local needs while keeping the ‘big picture’ in mind. This is vital because without the international level sitting above all the various local contexts, coordination and coherence become much more difficult. The benefits of cooperation may decline, resulting in a state-by-state approach to global problems. The existence of a background ‘penalty default’ is an important part of coordination that moves it beyond mere persuasion; de Búrca points to aid conditionality and consumer boycotts here,[10] but a penalty default can also involve the imposition of an unattractive default ruleset or political settlement where actors fail to agree.[11] The benefits of coordination are evident in the human rights context from the role played by ‘transnational networks’ such as the Child Rights International Network in linking national NGOs together and providing a set of tools for responding to human rights violations.[12]
Finally, human rights experimentalism enables local actors to learn from each other through the review and feedback mechanisms within the international framework. This pairs well with adaptability; if one actor tries an approach to a human rights problem and finds it lacking, they can provide feedback to other actors through the experimentalist process, and those other actors can make an informed decision either to avoid this approach or to try to modify it to fit their local context. Learning also helps to build a more cohesive transnational understanding of human rights challenges, enabling actors to anticipate these challenges based on the experiences of others and to act quickly to remedy them. The circular and multi-level nature of the process ensures that new insights are produced constantly, allowing actors to learn from each other on a continual basis.
C Experimentalism on the Frontier of Human Rights
Having demonstrated the relevance of experimentalism to human rights, I will now outline some emerging challenges to human rights and discuss how experimentalist techniques could help to address them, focusing primarily on AI with a briefer discussion of climate change.
I Emerging Challenges to Human Rights
(a) Artificial Intelligence
The rapid growth of AI poses serious challenges for human rights, relating to discrimination, technological complexity, and strains on existing principles of territoriality and state responsibility.
Firstly, the use of algorithms in public and private contexts exacerbates the existing discrimination faced by minority groups, as well as introducing potential new kinds of discrimination. AI can compound existing discrimination by failing to accurately perceive members of racial minority groups,[13] or by imposing performance requirements that fail to account for the circumstances of minorities.[14] The predictive power of AI can also introduce new kinds of discrimination that do not map onto existing protected characteristics in human rights law and are not as intuitively observed as more established kinds of discrimination.[15] This may be difficult to navigate for judges applying existing anti-discrimination doctrines, particularly where such doctrines require the use of judicial intuition and thus may struggle to adapt to new forms of discrimination. EU anti-discrimination law, with its reliance on ‘contextual equality’, is a good example of this.[16]
Secondly, the technological complexity of AI makes it difficult to formulate solutions that can adequately keep pace with technological change, and it limits understanding of algorithmic decision-making processes. Most lawyers and political leaders are not technologists, and there is a danger of over-deference to technologists who can offer an apparently workable solution, resulting in a ‘rule by technology’ that may not adequately address human rights concerns. This complexity also produces a ‘black box’ effect that obscures algorithmic reasoning processes and makes it very difficult to assess whether human rights are being appropriately considered.[17]
Finally, the rise of AI strains the applicability of existing legal principles of territoriality and state responsibility. To an extent this is a problem with the digital realm in general, but there are specific AI-related dimensions here too; data can be gathered from people in one state and used to make predictions that affect the lives and livelihoods of people in another state, and if this is all done by a private actor, international human rights law - with its focus on violations by states within their own territories - may at first glance have very little to say about it. Overall, AI is clearly a significant challenge for human rights.
(b) Climate Change
Climate change also poses significant challenges for human rights, producing major conflicts with state interests and (as with AI) difficulties with existing principles of territoriality and state responsibility.
First, while compliance with most human rights obligations requires some sacrifice of other state interests, the systemic economic change required to halt and reverse human degradation of the environment is significantly greater than what compliance with many other human rights obligations demands (obligations which many states already fail to meet). For example, the need to cut fossil fuel usage conflicts with the development goals of many states that rely on these fuels, and overconsumption in the global North is politically difficult to reduce.
Additionally, the harms caused by climate change often occur long after the violations were perpetrated and a long distance away from where they took place, which makes it difficult to apply conventional principles of territoriality and state responsibility. While violations of human rights arising from climate change harms have been found in some cases, such findings remain limited. In Billy v Australia the Human Rights Committee found a violation of various ICCPR rights arising out of harms caused by climate change, but only on the basis that the Torres Strait Regional Authority (part of the Australian federal government) had a positive obligation to provide various mitigation measures.[18] This represents progress for the Torres Strait Islanders, but what about countries like the Maldives, which according to some reports could be 80% underwater by 2050?[19] As an independent nation state, the Maldives cannot rely on this same protective obligation against large polluting states many thousands of miles away from it. Climate change thus poses serious challenges for human rights.
II How Can Human Rights Experimentalism Help?
(a) Artificial Intelligence
The flexible and dialogue-centred nature of human rights experimentalism - exemplified in the techniques of adaptability, coordination, and learning - is well-suited to deal with new and emerging challenges to human rights like AI.
Firstly, adaptability is required in relation to algorithmic discrimination because some existing local approaches may be less suited than others to addressing this problem - for example, those relying on judicial intuition[20] - and thus will require different adjustments. Coordination and learning are also helpful, as different states may have very different opinions regarding the acceptable limits of anti-discrimination law. Participation in the peer review process may assist local NGOs to push their governments to improve their commitments in relation to both algorithmic and more conventional forms of discrimination.[21] A penalty default in the form of some kind of aid conditionality may be appropriate here to keep all states on board and working towards solving the problem.
In relation to technological complexity, coordination and learning are essential to pool knowledge about AI and its impacts on human rights; because AI is such a new technology, some states and actors will have significant knowledge gaps, and other actors (for example, state-sponsored bodies like the Turing Institute, or the various UN Special Rapporteurs who have studied this matter) can assist in remedying this. This could produce an alliance between lawyers and political leaders on one hand and technologists on the other, each building on the other’s expertise and strengths, to produce an approach (or set of approaches) that is literate in both human rights and technology.[22]
Finally, the subversion of territoriality and responsibility principles can only really be addressed by a coordinated transnational approach such as that made possible by experimentalism. Private actors are important players in the AI sphere and harms to privacy and other rights are very difficult to pin to a single national territory, but the coordination inherent in the experimentalist process can remedy this in two ways. First, it may encourage closer cooperation between states to develop bilateral or multilateral agreements to regulate AI. Secondly, even if such agreements are a long way off, human rights experimentalism may produce informal cooperation and alignment of standards through the learning and coordination mechanisms, which could close regulatory gaps and make it harder for private actors to escape accountability. Indeed, the re-evaluation and revision process may recommend greater alignment of national legislation and may offer helpful suggestions for how to achieve this, drawing on experience from many different local contexts. The fact that experimentalism is not a purely international approach is crucial here; the strong domestic component means that international learning can result in domestic legislation with real consequences for human rights violations caused by AI.
(b) Climate Change
The challenges posed by climate change may also benefit from an experimentalist approach. The dangers of climate change for human rights operate as a weighty penalty default for states that are not inclined to participate, and this can be compounded by trade sanctions or boycotts of goods produced in violation of human rights and environmental standards. While domestic politics will likely continue to have the ultimate say in a state’s climate policy, the international pressure mechanisms created by human rights experimentalism can bolster action by civil society groups and opposition parties in local contexts, and in this way the conflict with other state interests may become less of an obstacle. Climate science is constantly evolving, and climate change affects some states earlier and more severely than others, so the learning function of human rights experimentalism allows states to learn from each other’s experiences in mitigating the human rights effects of climate change while preserving national interests.
Human rights experimentalism may also help to overcome the difficulties with territoriality and state responsibility. Greater cooperation can lead to a greater ability to hold private actors to account, and feedback from low-lying states (such as the Maldives or Indonesia) that are most at risk from climate disasters and rising sea levels may help to reinforce a sense of responsibility on the part of net polluter states (like the US, China, and most of the global North), as well as ‘bringing climate commitments home’ through the involvement of NGOs and other civil society organisations. Human rights experimentalism thus amplifies existing discourses about climate justice and the need to rethink elements of the international system.
D Conclusion
Human rights experimentalism clearly provides a constructive framework for addressing emerging challenges to human rights like AI and climate change. It cannot do everything by itself; states must engage in good faith with the experimentalist process, listen as well as contribute their own insights, and take bold action on the ground where necessary. However, while not the only solution, human rights experimentalism does provide a flexible and dialogue-centred process which pressures states to cooperate, and which allows for the emergence of insights informed by looking at all dimensions of a problem rather than just from the bottom up or the top down.
[1] Charles F Sabel and Jonathan Zeitlin, ‘Experimentalist Governance’ in David Levi-Faur (ed), The Oxford Handbook of Governance (Oxford University Press 2012) 169.
[2] Gráinne de Búrca, Robert O Keohane and Charles Sabel, ‘Global Experimentalist Governance’ (2014) 44(3) Brit J Pol Sci 477, 479; Gráinne de Búrca, ‘Human Rights Experimentalism’ (2017) 111 Am J Int’l L 277, 282.
[3] ibid.
[4] Sabel and Zeitlin (n 1) 170.
[5] De Búrca (n 2) 285.
[6] ibid.
[7] ibid 293-297.
[8] Sabel and Zeitlin (n 1) 175-176.
[9] Gráinne de Búrca, Reframing Human Rights in a Turbulent Era (Oxford University Press 2021) 56-59.
[10] De Búrca (n 2) 282.
[11] De Búrca, Keohane and Sabel (n 2) 478; Sabel and Zeitlin (n 1) 176.
[12] De Búrca (n 2) 292-293.
[13] Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) 81 PMLR 1, 8-11.
[14] Tetyana (Tanya) Krupiy and Martin Scheinin, ‘Disability Discrimination in the Digital Realm: How the ICRPD Applies to Artificial Intelligence Decision-Making Processes and Helps in Determining the State of International Human Rights Law’ (2023) 23 HRL Rev 1, 1-2; Solon Barocas and Andrew D Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 Calif L Rev 671, 714-732; Aislinn Kelly-Lyth, ‘Algorithmic discrimination at work’ (2023) 14(2) ELLJ 152, 155-156; Aislinn Kelly-Lyth, ‘Challenging Biased Hiring Algorithms’ (2021) 41(4) OJLS 899, 903.
[15] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI’ (2021) 41 CLS Rev 1, 5.
[16] ibid 6-19.
[17] Kelly-Lyth (n 14) (2023) 167-168; Kelly-Lyth (n 14) (2021) 928; Wachter, Mittelstadt and Russell (n 15) 5.
[18] Daniel Billy and others v Australia, Comm No 3624/2019, UN Doc CCPR/C/135/D/3624/2019 (21 July 2022).
[19] Adam Voiland, ‘Preparing for Rising Seas in the Maldives’ (NASA Earth Observatory, 19 February 2020) <https://earthobservatory.nasa.gov/images/148158/preparing-for-rising-seas-in-the-maldives> accessed 24 March 2024.
[20] Wachter, Mittelstadt and Russell (n 15) 6-19.
[21] An example of this in the context of children’s rights in Albania can be found at de Búrca (n 2) 304-309.
[22] An example of this in the context of the European Union’s SURVEILLE project can be found at Martin Scheinin and Tom Sorell, ‘SURVEILLE Deliverable D4.10: Synthesis report from WP4, merging the ethics and law analysis and discussing their outcomes’ (European University Institute 7 April 2015) <https://surveille.eui.eu/wp-content/uploads/sites/19/2015/04/D4.10-Synthesis-report-from-WP4.pdf> accessed 24 March 2024.