—Sofia Ranchordas, University of Groningen
[Editor’s note: This is one of our biweekly I-CONnect columns. For more information about our four columnists for 2020, please click here. In 2020, Professor Ranchordas will blog about public law and technology, sharing some insights from her recent scholarship on digital exclusion as well as recent developments in this emerging subfield of law. Her scholarship lies at the intersection of public law and technology and it seeks to understand how digital technology is reshaping public values and challenging fundamental rights.]
In October 2019, the newspaper The Guardian dedicated a full week to the “automation of poverty,” analyzing controversial governmental practices throughout the world that involve employing technology not only to determine welfare eligibility but also to closely monitor welfare recipients. The use of digital technology and other surveillance techniques to prevent welfare fraud had also been criticized months earlier by the UN Special Rapporteur on Extreme Poverty and Human Rights, as well as in recent literature on data-driven social security. This blogpost continues this discussion, identifying some of the legal problems of automating welfare services, particularly when this automation involves private actors.
In the series “Automating Poverty,” The Guardian reported, for example, that in India, a biometric identification system now determines access to food stamps, pensions, and medical care in order to reduce the risk of fraud. Glitches in this system have nonetheless proven difficult to resolve in a timely manner, at times with lethal consequences. In the Netherlands, public bodies have succeeded in automating numerous social security services in recent years. A particularly controversial project in this context is SyRI, a risk assessment system developed by the Dutch Ministry of Social Affairs, which predicts an individual’s probability of committing benefits fraud by analyzing large pools of personal data collected by several government agencies. When someone is ranked as ‘high-risk,’ public institutions are notified to start an investigation. While SyRI has not been particularly effective in detecting fraud, it has greatly contributed to increasing the stigma that accompanies poverty and welfare, as this system has primarily targeted low-income neighborhoods and minorities. Last year, a number of national human-rights and privacy associations and celebrities brought a lawsuit against the Dutch state arguing, among other things, that this system is a disproportionate tool to protect the welfare state and violates Article 8 of the European Convention on Human Rights (the right to privacy). The District Court of The Hague is expected to deliver its much-awaited decision later this month.
Despite its problematic character, SyRI and other digital surveillance systems developed by public bodies are not the only types of automated systems liable to violate human rights in the digital age. Most public law scholars have thus far—and rightly so—shown concern regarding the technological upscaling of existing societal stigmas against welfare recipients, the opacity of the employed algorithms, and the challenge of complying with the duty to give reasons when using proprietary systems. Despite these challenges, many governments are expanding their welfare fraud detection toolbox not only by including automated systems but also by involving private actors at several levels. For example, in the United Kingdom, IBM has developed a residents’ index that links data sources from across the London Borough of Camden and uses probabilistic matching techniques for identity verification and fraud detection. Similar services are provided to municipalities in the Netherlands by the companies Totta Data Lab and Ynformed. Private actors are nowadays employed as informants (e.g., citizens who share information with public authorities about their social media connections), as detectives with advanced technological skills and equipment, or as mere providers of digital technology in the fight against welfare fraud. However, this phenomenon has been largely overlooked in the legal literature. In democratic and high-trust states, there is a difference between the public automation of welfare services and fraud investigations and the outsourcing of this automation to private actors. As the literature on privatization has demonstrated time and again, in the latter case, there is a risk that citizens will be made worse off.
The involvement of private companies in welfare investigations enhances the risk of unfair, disproportionate, and discriminatory treatment of citizens due to the misalignment of interests and values between public and private parties. In the remainder of this blogpost, I provide a brief overview [for a more extensive analysis see here] of some of the problems of automating and outsourcing poverty or, in other words, of privatizing welfare through technology.
The outsourcing of inherently governmental tasks
Many jurisdictions throughout the world are acquainted with the concept of “inherently governmental tasks,” that is, public functions that should not be delegated to private actors (without a specific legal framework). It is worth asking whether the detection and sanctioning of fraud through highly intrusive technological means is one of these “inherently governmental tasks.” In favor of this argument, one could contend that welfare investigations affect the exercise of fundamental rights and the pursuit of the public interest. The misalignment of interests between public bodies and private contractors is an often-invoked objection against privatization. We expect this misalignment of interests to occur in particular when the remuneration of private detectives is directly or indirectly dependent on how much fraud they are able to detect, or when private companies are able to draw economic benefits from the data they collect during investigations. Will private agents also try to gather exculpatory evidence? Will they employ control variables to limit the weight of historical data even when there is a strong societal belief that some minorities are prone to committing fraud? The vague character of the concept of “inherently governmental task,” as well as the argument that the public-private divide is an outdated framework, could be offered as counterarguments.
Proportionality of surveillance
Welfare fraud investigations involving intrusive surveillance conducted by public or private actors must comply not only with clear and specific rules but also (in many countries) with the principle of proportionality. It is easy to imagine how private welfare investigations can become disproportionate to the aim of combating fraud. In Switzerland, a recent legislative amendment, following the ECtHR’s Vukota-Bojić judgment, now allows public and private insurance companies to employ private detectives and surveillance technology to follow and record welfare recipients anywhere in the public space. The emergence of private social welfare spies that can draw on large amounts of evidence and follow recipients raises multiple questions regarding the proportionality of this policy (see also Big Brother Watch and Others v. the United Kingdom). Considering that outsourcing enforcement tasks to private actors exacerbates the risk of violations of fundamental rights, public institutions should make a double proportionality assessment: first, whether it is necessary and adequate to outsource welfare investigations to private actors, and what the risks of doing so are; second, whether the need to rely on additional private agents to gather evidence is proportionate to the resulting interference with welfare recipients’ right to privacy.
Accountability
The lack of accountability, or the challenge of applying public law obligations to private contractors, is one of the oldest challenges of privatization and the outsourcing of public tasks. This challenge is also present when welfare fraud investigations are outsourced to private agents. In the Netherlands, performance contracts have been designed to ensure that private investigators are only remunerated for their services if fraud is detected. These contracts do not establish a clear contractual relationship between state actors and private actors during the investigation phase, which means that private investigators cannot be qualified as state actors while gathering evidence. In practice, this expands the evidence-gathering toolbox of these actors beyond the legal limits. Dutch administrative courts have invalidated the effects of this type of contract, underlining that they violate the right to due process and equality of arms. Moreover, the outsourcing of powers to private actors should not mean that these powers are expanded beyond what public actors were allowed to do or that accountability and proportionality assessments are bypassed. As Catherine Donnelly explains: “Given that the power does not change when transferred from public to private, […] the controls on [this] power should [not] be different: control should depend on the nature of the power—not on the identity of the power-holder.”
Conclusion
The involvement of private actors and the use of automated systems developed by private technology companies illustrate new dimensions of the growing privatization movement: on the one hand, governments seek to optimize their systems, offer a fair allocation of services, and rely on the expertise of private technology companies. On the other hand, by relying on informants, private detectives, and private digital technology, governments may have limitless possibilities to obtain information on the lives of welfare recipients, further stigmatizing poverty. In this context, public authorities delegate, directly or indirectly, the tasks of predicting and detecting fraud to private actors not only to save public money but also to expand their access to information about welfare recipients. Lawyers currently worry about the automation of the State and the use of opaque algorithms. However, technology has also brought back an old concern to the research agenda of legal scholars: the limits of privatization and the outsourcing of public functions. In the digital age, the privatization movement is expanding along the path of technology, and the welfare state is not only being automated, it is also being outsourced.
Suggested citation: Sofia Ranchordas, Public Law and Technology: Automating Welfare, Outsourcing the State, Int’l J. Const. L. Blog, Jan. 15, 2020, at: http://www.iconnectblog.com/2020/01/public-law-and-technology-automating-welfare-outsourcing-the-state/
Comments
2 responses to “Public Law and Technology: Automating Welfare, Outsourcing the State”
[…] Sofia Ranchordas wrote an interesting contribution on Automating Welfare, Outsourcing the State http://www.iconnectblog.com/2020/01/public-law-and-technology-automating-welfare-outsourcing-the-sta…. In it, she argues […]
Most interesting. I have a copy of Virginia Eubanks, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” (Picador, 2019); I’ve also read the “Guardian” series. Modern governments are definitely using technology to shift the costs of poverty away from themselves onto the poor. There seems to be no political will to reverse this trend either. Thanks for your work though.