Blog

Reflection on Brussels Privacy Hub’s annual Summer Academy (From Data Access to Data Transformations: How to Govern Data in the Age of Analytics and AI?)


13 July 2023

A report by Weiwei Yi, CREATe PhD candidate

With the support of the law school and CREATe, I had the honour of participating in the Annual Summer Academy in Brussels, Belgium, held by the Brussels Privacy Hub (BPH) (@BPrivacyHub) under the topic “From Data Access to Data Transformations: How to Govern Data in the Age of Analytics and AI?” from 19 to 23 June. The Brussels Privacy Hub is an academic research centre affiliated with the Vrije Universiteit Brussel. Strategically located in Brussels, it engages with EU policymakers, regulators, the private sector, and NGOs, producing innovative research on data protection and privacy law and leveraging the city’s influence in setting global standards.

Pictures by Weiwei Yi

The Summer Academy commenced with a keynote speech by Leonardo Cervera Navas, the Director of the European Data Protection Supervisor (EDPS) office, on “Data protection in the age of analytics and AI.” The speaker touched upon several key points, including the influential role of the EDPS’s opinions in providing policy advice to EU authorities, the importance of raising global awareness of data protection through EU initiatives, the significance of data in the era of machine learning, and the ideological tension between the “free flow of data” and the “protection of fundamental rights.” Leonardo emphasized the new challenges and regulatory needs related to AI, such as discussions on digital ethics, particularly highlighting the need to develop “explainable AI” and the importance of a unified EU voice in AI regulation. Additionally, he stressed the need to reassess certain aspects of the GDPR, such as the purpose limitation principle, and underscored the strategic value of the forthcoming AI Act for the EU’s economy. At the end of the speech, Leonardo reiterated the principle of “data protection by design/default,” advocated for a human-centred approach to AI systems, and emphasized the significance of discussing key regulatory topics at the EU and international levels. (See the related publication here)

The following presentation by Gianclaudio Malgieri (Leiden University & VUB) discussed the topic “Risk Assessment for Data Re-use”. Gianclaudio focused on Article 6(4) of the GDPR and questioned the compatibility test for data reuse and data repurposing, particularly when consent is not the legal basis. His presentation centred on the interpretation of terms such as “link,” “context,” and “relationship,” emphasizing the crucial role played by “context” by quoting Helen Nissenbaum’s work on “Privacy as Contextual Integrity.” The speaker also questioned the “nature” of personal data as another barrier to data reuse, considering the dichotomy between sensitive and non-sensitive data, and reiterated the key role of context in defining data sensitivity. Moreover, Gianclaudio discussed the importance of proportionality and necessity in a democratic society, providing examples such as tax authorities reusing data collected for other purposes. To pursue such legitimate repurposing, he emphasized the need for safeguards and proportionate measures and briefly discussed the effectiveness of privacy-enhancing technologies (e.g., encryption, pseudonymization) in the context of Article 6(4). Finally, Gianclaudio broadened the topic by touching upon the supplementary framework of “data cooperatives” for data reuse, connecting it to the discussion of the Data Governance Act further elaborated by other speakers.
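To make the pseudonymization safeguard mentioned above a little more concrete, here is a minimal sketch (my own illustration, not drawn from the talk) of keyed pseudonymization in Python. The function name and key value are hypothetical; the point is that the secret key plays the role of the “additional information” that, under GDPR Article 4(5), must be kept separately so the pseudonym cannot be linked back to the individual without it.

```python
import hashlib
import hmac


def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    Unlike a plain hash, an HMAC cannot be recomputed by an attacker
    who knows only the identifier: the key is the separately stored
    'additional information' needed to re-link the pseudonym.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()


# Hypothetical example: pseudonymize the email field of a record.
key = b"stored-separately-under-organisational-measures"
record = {"email": "alice@example.com", "purchase": "book"}
safe_record = {**record, "email": pseudonymize(record["email"], key)}
```

The same identifier always maps to the same pseudonym under one key (so records remain linkable for analytics), while a different key yields an unrelated pseudonym.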

In the next part, Ronald Leenes (Tilburg University) discussed the topic “Where Data Meets AI: The Interplay between GDPR and the AI Act.” Ronald comprehensively compared the versions of the Artificial Intelligence Act and highlighted the need for EU legislative logic to continuously adapt to evolving AI technologies. He stressed the GDPR’s powerful governance of AI, owing to its broad definition of personal data and the indispensability of data in model training. By comparison, the speaker also raised questions about whether the risk-based approach and the principle of data minimization remain effective in the context of contemporary AI practices. Additionally, Ronald urged the academic community to prioritize issues of opacity, environmental impact, and automated decision-making. Lastly, he proposed shifting the AI Act’s focus from regulating AI technology to protecting users’ rights and remedies.

Hinda Haned echoed these doubts about the data minimization principle and likewise adopted a “pro-user” standpoint. As an AI scientist and practitioner, she talked about algorithmic bias, highlighting its various sources and potential harm to data subjects. Hinda touched upon the challenges of defining fairness in mathematical terms and implementing the data minimization principle in practice. In particular, she discussed the struggle of aligning commercial development and profitability goals with data processing principles. In the end, Hinda provided recommendations for companies adopting AI technologies (e.g., sufficient stakeholder involvement and AI ethics education within management).

António Biason (European Commission) opened the second day of the course with his talk on “The European strategy for data and the Data Act proposal”. The speaker gave an update on the progress of the Data Act legislative process at a macro level. Taking the 2020 European Data Strategy as his starting point, he introduced the four pillars underpinning it and spoke at length about the vision of building several Common EU data spaces and creating an EU single market for data by connecting them through data intermediaries. António specifically drew the audience’s attention to the design of Chapter 2 (on B2B and B2G sharing of IoT data) and Chapter 3 (on sharing rights and obligations and dispute resolution mechanisms). By comparison, the speech of Johan Bodenkamp (European Commission Programme & Policy Officer) provided more detail on the administrative side of the story, showing the attendees how the EU data spaces work, including High-Value Datasets from public sectors such as health, agriculture, and finance, and the organization of the Data Spaces. Johan also introduced the DSSC’s network of stakeholders, the community of practice, assets, and other components of the EU single data market.

The third day of the course adopted the perspective of global comparative law. Isabelle Servoz-Gallucci (Council of Europe) discussed the historical development of Convention 108, which was influenced by the right to privacy in the Universal Declaration of Human Rights and Article 8 of the European Convention on Human Rights. She highlighted the universal reference and significant impact of Convention 108 worldwide. On the updates in Convention 108+, Isabelle emphasized that it is regarded as a global standard promoting common values, allowing for the free flow of data while striking a balance between robust safeguards and the flexibility to accommodate different national legal systems. Taking a regional point of view, Benjamin Wong (University of Singapore) offered an overview of data reuse and sharing in Singapore in comparison with the GDPR. For example, the speaker noted that Singapore’s standards are less strict than the EU regulations, with a focus on data sharing within the public sector for specific purposes. Singapore’s data protection law is similar in structure to the GDPR but not rooted in constitutional provisions on human rights. Data reuse and sharing in Singapore require consent, but there are multiple exemption conditions for data reuse. Benjamin also discussed factors to consider in assessing the adverse effects of data reuse, including the impact on individuals, the nature of personal data, vulnerability, the extent of data reuse, and the potential impact of decisions.

When discussing “Data Access from Third Countries & Data Transfers” on the fourth day, Tania Schroeter (European Commission DG Justice) updated the audience on the progress of the e-evidence Regulation and Directive package. She gave a brief overview of the two proposals and highlighted the newly introduced notification mechanism. Tania asserted that the EU e-evidence package is a new form of judicial cooperation, better suited to the logic of the internet, which achieves a wide jurisdictional reach by departing from the criterion of data storage location. At the same time, she suggested that some challenges in the implementation of the new law remain, including how to regulate the volume of orders, how to supervise non-cooperative service providers, and how to deal with conflicts of laws.

In the second session of the day, Christopher Kuner from VUB and Laura Drechsler from KU Leuven offered their opinions on the question “Is data transfer regulation fit for purpose?”. The speakers acknowledged the successes of data transfer regulations, including the GDPR, but also highlighted several problems. They emphasized the need to focus on the bigger picture and purpose of data transfer regulations rather than getting lost in the details. In particular, they argued that the historical approach to data transfer regulation has been haphazard and driven by political priorities. Following this thread, the speakers mentioned the prolonged focus on data transfers to the US, the failure to address data transfers to other major economies (e.g., China, Turkey, and Brazil), and an extensive emphasis on law enforcement issues while neglecting other important areas such as humanitarian organizations, pandemics, and medical research. They emphasized the role of institutions, particularly the Commission, in handling data transfers and the need to consider both legal and political aspects. They also called for addressing the lack of transparency and clear methodology in assessing the level of protection in different countries, and for mitigating delays and challenges in cross-border enforcement and decision-making. (See the related blog here)

On the last day of the academy, Sophie Stalla-Bourdillon (VUB) and Alfred Rossi from Immuta shared their knowledge of privacy-enhancing technologies under the title “On Unconditional PET Love & a Few Inconsistencies”. Sophie discussed various concepts and definitions related to anonymization and data protection. The speakers also touched upon the interpretation of “identification”, “singling out”, and “inference” in the context of PETs, and pointed out the paradox and trade-off between privacy/security and data efficacy. To ground the discussion, Alfred invited the audience to assess a case of data processing involving age verification and to analyze the potential issues and threats associated with it. Through this interactive session, Alfred highlighted the importance of encryption and safeguards to protect privacy and prevent unauthorized access. He also emphasized the importance of “context” and offered his opinions on the strengths and weaknesses of different identity verification methods depending on the specific circumstances and participants involved. (See the related document here)
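The notion of “singling out” discussed above can be illustrated with a small sketch (my own, not from the session): even without names, a combination of quasi-identifiers that matches exactly one record in a dataset singles that individual out. The function and field names below are hypothetical.

```python
from collections import Counter


def singling_out_risk(records, quasi_identifiers):
    """Return the quasi-identifier value combinations that match
    exactly one record, i.e. that 'single out' an individual."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return [combo for combo, n in counts.items() if n == 1]


# Hypothetical dataset: the third record is unique on (zip, age, sex)
# and is therefore identifiable despite the absence of names.
records = [
    {"zip": "G12", "age": 34, "sex": "F"},
    {"zip": "G12", "age": 34, "sex": "F"},
    {"zip": "G11", "age": 71, "sex": "M"},
]
risky = singling_out_risk(records, ["zip", "age", "sex"])
```

This is the intuition behind k-anonymity style checks: any combination occurring fewer than k times is a re-identification risk.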

Subsequently, Andrea Gadotti from Imperial College discussed the topic “PETs & Machine Learning: An Impossible Union?”. Andrea addressed inference attacks and the importance of mitigating them using techniques such as differential privacy, which involves injecting noise to protect sensitive information while still allowing useful insights. On the other hand, he raised concerns related to open AI models and LLMs, especially around user behaviour monitoring, telemetry data tracking, and intellectual property disputes. He particularly mentioned the urgent need to strengthen “machine unlearning” steps in model training, while suggesting the implementation of filters or insulation against untrusted model providers.
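The noise-injection idea behind differential privacy can be sketched in a few lines (my own illustration, not code from the talk). A counting query has sensitivity 1, since adding or removing one person changes the count by at most 1, so adding Laplace noise with scale 1/ε yields an ε-differentially private answer. The function names and data are hypothetical.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon: float) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    Counting queries have sensitivity 1, so Laplace noise with
    scale 1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Hypothetical query: how many records have age over 30?
records = [{"age": a} for a in [25, 40, 31, 67, 52]]
noisy = dp_count(records, lambda r: r["age"] > 30, epsilon=1.0)
```

Smaller ε means more noise and stronger privacy; larger ε gives answers closer to the true count. This is the trade-off between privacy and data utility mentioned throughout the sessions.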

Given the packed schedule of this summer academy, the content of each lecture is worthy of further discussion. I had the opportunity to speak with the lecturers face to face, gain insights into the latest developments in the EU data and AI regulatory framework, and engage fully with practitioners and early-career scholars from data protection agencies and various parts of the world. I am grateful for the well-planned curriculum, as well as the support from CREATe and the law school of the University of Glasgow.