
by Dominika Dziwisz
In our ongoing series exploring the impact of the COVID-19 crisis in the CSEE region, Dominika Dziwisz assesses the complex challenges and uncertainties of citizen data gathering, surveillance and privacy.
In 1972 two American palaeontologists decided to take a look at the history of evolution from a different angle. Building on their earlier research, Niles Eldredge and Stephen Jay Gould formulated the theory of punctuated equilibrium, according to which populations of living organisms undergo significant evolutionary change during short, rapid events. In other words, evolution is not a slow and continuous process; instead, it occurs rapidly during times of crisis. The COVID-19 pandemic seems to be exactly this kind of special moment for our civilization. Stress and the uncertainty associated with it are accelerating social changes, blurring the boundaries between the physical, digital, and biological worlds.
At this turning point, world leaders face choices fraught with dangers, but also opportunities. According to Naomi Klein’s “shock doctrine”, the pandemic enables them to achieve economic and political goals that their societies would otherwise strongly oppose. Some governments and corporations are already taking advantage of the COVID-19 crisis to advance their own vision of a future in which every human activity can be tracked through data analytics. In certain instances, this even takes the form of hastily changing laws and regulations on personal data access and usage. And while that debate is relevant and important, such decisions should never be made without appropriate consideration. Hasty action, allegedly justified by the need to respond to the pandemic and save lives, increases the risk of introducing insufficiently tested or even dangerous technologies that will remain with us for the foreseeable future.
On the other hand, one might ask whether saving even one life is not worth temporarily sacrificing some of our privacy. In Western culture, citizens place the highest value on human life and health; by that logic, such an end should justify the means. In the history of mankind, societies have had to endure far more severe hardships to survive, so our current privacy concerns might be just “first world problems”. Unfortunately, there are no easy answers here. The one thing that is certain is that governments are obliged to do far more to protect their citizens’ rights. This post looks more deeply into the problem.
Brave New Privacy Era
Widespread fear of the virus has lowered vigilance and dampened concerns about the ways in which private companies and governments may use information about citizens. Therefore, while the world waits for the second wave, it is a good moment to review what is actually at stake. In the name of safety, states are implementing systems that monitor and collect data on people’s locations, meeting histories, and purchases, as well as body temperature, heart rate, and other parameters. Facial recognition cameras are being used to detect gatherings and identify participants. In other words, in order to combat the pandemic, various states are deploying systems that enable granular tracking of all their citizens’ activities in the physical world.
An even bigger problem relates to data storage and retention policies. In the age of Big Data, collected data is processed and combined endlessly, in ways that are currently impossible to predict. The European Union is aware of this problem and addressed it with the General Data Protection Regulation (GDPR), which came into force on 25 May 2018. The GDPR created a catalogue of clearly defined rules of conduct for the processing of personal data and placed many restrictions on personal data processors. In addition, subsidiary regulations were created for processing electronic communication data, e.g. location data from mobile devices.
This is a solid foundation. The GDPR guarantees that privacy policies contain adequate provisions and will be respected. The underlying assumption is that, being aware of all the possible ways their data can be used, individuals can decide whether that suits them and whether they want to enter into such a relationship, while the invisible hand of the market weeds out entities that are too greedy. The problem starts when the entity under consideration is the state. In that case an individual may be unable to opt out and is forced to participate. Citizens currently have to accept that all the aforementioned personal data being collected might be transferred, stored, and processed indefinitely. Officials will do their best to protect this information from leaks and hackers, but might retain access to the data for the sake of protecting society from current and future pandemics and other dangers. This would be done by current and future governments, in full accordance with the privacy policy. On the one hand, this is understandable, because the direction in which the virus will spread is extremely unpredictable; on the other hand, indefinite data retention seems excessive, risky, and uncomfortable.
Privacy or Prejudice
Until now we have mostly been discussing hypothetical scenarios that might take place in democratic countries, e.g. members of the European Union. The conclusions might be concerning, but they might also be exaggerated. To verify them, we need to look at what is actually taking place. Half a year after the pandemic started, we have several anti-corona apps to analyse and learn from. For a start, the Polish ProteGO Safe, the Austrian Stopp Corona, the Czech eFacemask, and the Slovak Stay Healthy do not specify how long the data will be stored.
The list of concerns about the safety of tracking applications is much longer. Among other things, there is the risk of de-anonymization and of reconstructing maps of social connections, as well as the threat created by keeping Bluetooth permanently switched on. Moreover, there are risks inherent in automated decision-making that could have serious consequences for people using these apps. The applications implement an automated profiling mechanism, which means that it is the algorithm that “decides” which coronavirus risk profile to assign to a specific person based on their social interactions. The use of profiling results may be particularly controversial in countries undergoing a democratic crisis. One could argue that a small change to the algorithm could switch it from searching for those who need to be quarantined because they might have been infected with the COVID-19 virus, to searching for those who might have been “infected” with an idea of which the government does not approve. The signals used (such as frequency of contacts or history of places visited) are exactly the same in both cases, as the sketch below illustrates. Last but not least, in the majority of cases the division of responsibilities and access privileges between the government and the private entity maintaining the app is not defined clearly enough.
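To see how thin this line is, consider a minimal, hypothetical sketch in Python of contact-based risk propagation. Nothing here corresponds to any real app’s code; the function, parameters, and names are invented for illustration. The point is that the contact graph, the signals, and the scoring logic are entirely purpose-neutral: only the seed set, supplied by whoever operates the system, determines whether the output is an epidemiological quarantine list or a political watchlist.

```python
from collections import defaultdict

def propagate_risk(contacts, seed_ids, decay=0.5, rounds=2):
    """Spread a risk score outward from a seed set through a contact graph.

    contacts: list of (person_a, person_b) pairs, e.g. from Bluetooth proximity logs
    seed_ids: the people flagged at the start -- confirmed cases in the
              epidemiological use, or any other watchlist in the abusive one
    """
    # Build an undirected adjacency list from the contact log.
    graph = defaultdict(set)
    for a, b in contacts:
        graph[a].add(b)
        graph[b].add(a)

    # Seeds start at maximum risk; everyone else starts unscored.
    scores = {person: 1.0 for person in seed_ids}
    for _ in range(rounds):
        updates = defaultdict(float)
        for person, score in scores.items():
            for neighbour in graph[person]:
                # Each hop away from a seed attenuates the attributed risk.
                updates[neighbour] = max(updates[neighbour], score * decay)
        for person, score in updates.items():
            scores[person] = max(scores.get(person, 0.0), score)
    return scores

# The algorithm is identical in both scenarios; only the seed set encodes the purpose.
contacts = [("anna", "bartek"), ("bartek", "celina"), ("celina", "dawid")]
print(propagate_risk(contacts, seed_ids={"anna"}))
# {'anna': 1.0, 'bartek': 0.5, 'celina': 0.25}
```

Swapping `seed_ids` from a list of confirmed infections to, say, a list of protest organisers requires changing not a single line of the scoring logic, which is precisely why governance of such systems cannot rely on the code alone.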
Given all of this, it is becoming clear that a supranational body of subject-matter experts is needed to advise and monitor governments’ behaviour in this domain. The mere lack of choice and the introduction of obligatory requirements pose a risk of violating basic human rights. There is a need to establish clear guidelines and expectations regarding:
– national governments’ guarantees regarding the purposes for which the data can be used; the use of data for non-medical purposes, e.g. punishing citizens, should be forbidden,
– specifying the provisions that need to be included in privacy policies. There are 80 different contact-tracing apps around the world, and 20 percent of them do not have a specified privacy policy,
– defining the level of protection of personal data against the access of unauthorised persons,
– careful consideration of data retention policies,
– conducting an information campaign explaining the rules of data handling. For personally identifiable information, it is especially important that governments publicly announce what data is being collected, why it is collected, with whom it will be shared, how it will be secured and for how long it will be kept. Otherwise, the risk of losing public confidence in the actions taken is high.
In parallel to those very tactical concerns, we need to address more strategic questions:
1) What is the actual efficacy of social surveillance in controlling the spread of the disease? Are the methods used proportionate to the risks they incur?
2) What are the consequences of mass data collection in the future? How can the collected data (not only medical) be used?
3) How can the right to privacy be respected, and guaranteed at a sufficient level, both during and after the pandemic?
Answering these questions and reaching a social consensus on this very delicate matter will be the goal of further research on the development of modern technologies in connection with human rights. Moreover, responding to these challenges is only the beginning of a long-term, continuous process, because new technologies and their applications are constantly being launched.
Is 2020 the new 1984?
A pandemic, like any other crisis, is a catalyst for innovation. However, the current situation differs from previous ones because we are not creating new weapons to fight a hostile state, but to fight an invisible, elusive enemy in order to save people’s lives. From the beginning it was understood that the chances of quickly finding a vaccine against COVID-19 were small, and that modern ICT solutions could effectively support the methods of fighting the pandemic and controlling its spread. Policy makers have been forced to weigh the risks of violating the law against the benefits of combating the disease. They put citizens’ privacy in jeopardy, and we can now conclude that they did not focus sufficiently on creating mechanisms to audit data processing policies. Furthermore, we can be sure that the future will bring many more temptations to use modern surveillance methods to protect societies from things that governments perceive as threats. Hopefully, democratic means of overseeing government will ensure that no European country introduces a programme similar to the Chinese Social Credit System, under development since 2014, which uses algorithms on a mass scale to evaluate the behaviour of Chinese citizens. Nevertheless, this is the last moment to consider what sort of oversight of Big Data applications for citizen tracking is needed.
Dominika Dziwisz, PhD, is an assistant professor at the Institute of Political Science and International Relations of the Jagiellonian University in Kraków. She also serves as director of the BA Programme in International Relations and Area Studies at the Faculty of International and Political Studies of the Jagiellonian University. She holds master’s degrees in both International Relations and Marketing and Management. Her PhD research focused on cybersecurity policy in the USA. This topic, along with the relationship between Big Data and human rights, remains at the centre of her research interests.
All views expressed in this article are the views of the author and not necessarily those of the Ratiu Forum or LSE IDEAS.