<aside> 💡

The strongest argument we as civil society can use in the DPI debate is public trust. Any widespread success of DPI depends on it.

</aside>

For large parts of the population to accept any new digital system, they must first understand it and integrate it into their daily lives. Public attention to risks, flaws and insecurities can severely undermine this trust. Trust is always hard-earned and easily lost.

Politicians who roll out DPI often want to be celebrated for it, portraying these projects as new, modern and beneficial for everyone. But any new system that impacts people’s lives and requires them to do things differently will inevitably spark debate and lead to the formation of new habits and positions. In an open society, this can empower civil society to play an active role in the DPI debate.

Public scrutiny has to start at an early stage: questioning the goals the project aims to achieve, who is involved in it and what commitments the government makes from the beginning. It is often beneficial for a civil society organisation to highlight the common risks associated with DPI early on and to issue public statements that compare the proposed DPI with existing internationally recognised safeguards. For example, cybersecurity incidents and data breaches are almost inevitable, and having warned about these risks in advance reinforces the organisation's credibility when they materialise.

Predictability

Predictability is an important concept for human interaction with technology. We typically have a set of assumptions about what will happen based on our actions, and we perceive technology that breaks these assumptions as flawed or unpredictable.

DPI usually promises a clear utility, aiming to improve existing processes from our analogue world. First, it’s important to assess whether the promised utility is actually delivered by the DPI and to identify who benefits from it. Often, the benefits are not equitably distributed throughout society, and marginalised groups must be the focus of such assessments. Second, there is almost always additional functionality of the DPI that governments are pursuing: for example, predictions about parts of the population based on the data generated by digital processes, more granular control over government processes and resource allocation, or statistical analysis of what happens in certain demographics.

<aside> <img src="/icons/checklist_green.svg" alt="/icons/checklist_green.svg" width="40px" />

There is also the element of predictability that concerns privacy. In the analogue or pre-DPI world we would not assume that…

</aside>

<aside> 📌

Trust in the DPI can be undermined if the system does not respect the expectations of the user and acts against their interests. A person will rightfully feel betrayed if their personal information is handled disrespectfully.

</aside>

The Three Layers of Privacy in DPI systems

Establishing trust in DPI systems rests on three layers:

  1. Legal Framework – The laws that govern the DPI. This needs to include both a) privacy legislation and b) specific laws governing the DPI.
  2. Privacy-Preserving Architecture – The system should be built with a technical design that minimises risks. It needs to incorporate values that empower and protect the individual.
  3. Individual Rights and Redress Mechanisms – Mistakes will happen, and when individuals are left alone with them, trust in the system is undermined.

<aside> <img src="/icons/arrow-right_gray.svg" alt="/icons/arrow-right_gray.svg" width="40px" />

Layer 2 is explained in Modules 2 & 4 on technical foundations and safeguards.

Layers 1 & 3 are detailed in Modules 3 & 5 on Data Protection Principles and Governance.

</aside>