
DUET Launches Ethical Principles for Using Data-Driven Decisions in the Cloud

Key Points

  • Local Digital Twins (LDTs) use personal and non-personal data to improve urban operations, the environment, and economic outcomes within cities.

  • Despite enabling increased efficiency and forward-looking city planning, the use of personal data can put the rights and interests of individuals at risk, resulting in harm at a personal level.

  • Therefore, with the support of the DUET LDT pilots (Athens, Pilsen, Flanders), which provided a range of findings and real-life, data-supported policy case studies, the legal (GSL), management (AIV/DV) and technical (IMEC) consortium partners drafted a guide on ethical principles for using data-driven decisions in the cloud.

  • The aim of this guide is to support future LDTs in making ethically aware and legally compliant decisions regarding data processing.





Abstract

This deliverable provides the final version of an Ethical Code of Conduct tailored to assist in any data-based decision-making process. The DUET LDT pilots undertook interviews with GSL to discuss their use of the first iteration of the deliverable (D1.5), improvements to the structure and user experience of the guidance, and the evolving nature of the use cases. As the second and final iteration, the guidance also covers new user types and their ethical considerations, including entrepreneur/founder and smart city provider roles, as well as an extended public servant/city official role.


In particular, the guidance discusses the building blocks of the ethical discourse around data-based decision making, and suggests an ethical code of conduct (ethical principles) for LDTs in such a context.



Recommendations

  • Ethical considerations should be read alongside data protection and privacy aspects. In particular, LDT pilots should be aware of the steps needed to ensure a privacy-by-design approach, including the data minimisation principle.

  • Anonymisation or avoidance of personal data is preferable unless the data is strictly necessary for your task and proportionate to meeting the pre-defined purpose of your activity. Where personal data must be used because anonymisation would defeat the purpose of the use case, the guidance highlights techniques and procedures to be avoided, such as singling out records, linking records across datasets, and inferring personal data.

  • Consider the interconnected nature of data storage. Despite the focus of DUET on LDTs, use cases should consider the possibility that personal data may be stored outside of Europe, and the safeguards required in such cases.


The final twelve ethical principles are listed below. For the complete context and further guidance for smart cities, see the full deliverable below.



 

1. Accountability and data sovereignty

  1. Know the origin of the data, its lawful and ethical uses, and any limitations on its sharing or publication.

  2. This includes understanding the origin of data when working with private/public data sources as expressed by the European Union Agency for Cybersecurity (ENISA).

  3. This also includes understanding all possible locations where personal data may be processed and the different data regulations it may be subject to.


2. Transparency

  1. You should know what data you collect and for what purposes.

  2. The data subjects (e.g. the citizens) should know what data you collect about them and for what purposes.

  3. Be transparent about the scope and source of the data, as well as its limitations. Explain what information the data contains, how (and where) it was collected, and whether it is static, updated regularly, or real-time (an illustrative metadata record follows this principle).

  4. If the data is publicly available, provide a link to the original data repository/source URL.

  5. Make sure that decision makers are aware of the deficiencies/limitations of the data.

  6. Promote knowledge of utilisable data/models within your organisation so that employees are aware that helpful data or applications may be available to carry out their tasks.
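
As a concrete illustration of this kind of dataset documentation, here is a minimal sketch of a transparency metadata record in Python. The field names and values are illustrative assumptions (loosely inspired by common open-data vocabularies such as DCAT), not a schema prescribed by the guidance.

```python
import json

# A minimal, illustrative dataset description record. Field names are
# hypothetical choices, loosely inspired by open-data vocabularies such
# as DCAT; adapt them to whatever metadata standard your city uses.
dataset_metadata = {
    "title": "City air-quality sensor readings",
    "description": "Hourly NO2 and PM10 measurements from street-level sensors.",
    "source": "Municipal environment department",
    "source_url": "https://example.org/open-data/air-quality",  # link to the origin repository
    "collection_method": "Fixed roadside sensors, automatic upload",
    "update_frequency": "hourly",  # static / updated regularly / real-time
    "known_limitations": [
        "Sensor coverage is sparse in suburban districts",
        "Readings during maintenance windows are interpolated",
    ],
    "licence": "CC-BY-4.0",
}

print(json.dumps(dataset_metadata, indent=2))
```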


3. Data quality

  1. Get the best data you can for your purposes. Best may mean:

    1. data most suited for your purpose;

    2. most complete, correct, and up-to-date data (clean data);

    3. data with a transparent track record of its collection, storage, and previous processing;

    4. data with a clear licence to (further) use.

  2. Take active steps to ensure and maximise the quality, objectivity, usefulness, integrity and security of data (a basic automated check is sketched below).
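
To make the idea of clean, up-to-date data more tangible, the following is a minimal sketch of automated quality checks using pandas. The column names, the staleness threshold, and the toy dataset are illustrative assumptions, not part of the guidance.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, timestamp_col: str, max_age_days: int = 30) -> dict:
    """Illustrative completeness and freshness checks; the threshold is
    hypothetical and should be set per use case."""
    timestamps = pd.to_datetime(df[timestamp_col])
    age_days = (pd.Timestamp.now() - timestamps.max()).days
    return {
        "rows": len(df),
        "missing_values_per_column": df.isna().sum().to_dict(),  # completeness
        "duplicate_rows": int(df.duplicated().sum()),            # correctness hint
        "days_since_last_record": age_days,                      # up-to-dateness
        "stale": age_days > max_age_days,
    }

# Example usage with a toy dataset:
readings = pd.DataFrame({
    "sensor_id": [1, 2, 2],
    "reading": [41.0, None, 38.5],
    "measured_at": ["2024-05-01", "2024-05-02", "2024-05-02"],
})
print(basic_quality_report(readings, timestamp_col="measured_at"))
```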


4. Data quality for publication

  1. If the data is good enough for internal use (within the services of the city), it is typically equally good for being made publicly accessible (open).

  2. Use open standards and open licences.

  3. Publish / share data only after you have cleared the applicable legal requirements.


5. Data security

  1. The integrity and security of data should be maximised.

  2. Use trusted third-party service providers (e.g., approved by the future European Union Cybersecurity Certification Scheme on Cloud Services (EUCS)).


6. Data everywhere

  1. Promote the use of data in the public interest, and be active in seeking out data that may be (re)used in the public interest.

  2. Actively explore the ways in which data can be obtained from partners (private or public) with whom you are engaged in a joint activity (e.g., public procurement).


7. Transparent and fair use of AI and computer models. Fighting the “opacity” problem.

  1. Cities should strive to develop officials’ ability to understand, interpret and use automated decision-making systems. Officials should understand at least the basics of the underlying algorithms and the data used. This can be achieved, for example, through targeted education and training.

  2. Data subjects (citizens) should be informed that automated decisions are being taken about them, with the help of their data. Cities should also strive, to the extent practicable, to ensure that data subjects understand the underlying algorithms.

  3. Algorithms and automated decisions should be fair and proportional. They should not prejudice the data subjects. Even though some bias may be inherent in data, the algorithms and the data they use (or train on) should not create or perpetuate material biases (racial, ethnic, sexual, political, religious, etc.).

  4. Ensure an element of human control over the AI:

    1. Individuals to whom human oversight is assigned should fully understand the capacities and limitations of the AI system and should be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible.

    2. Data subjects should be granted a right of appeal in relation to data processing and the automated decisions that affect them.


8. Presentation of data or results

  1. The way data or data-based decisions are presented should avoid creating or perpetuating bias (e.g., avoid red and green colour coding in visualisations, which is also inaccessible to colour-blind viewers).
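
One simple way to put this into practice is to use a colour-blind-safe palette instead of red/green coding. A minimal sketch with matplotlib follows; the district data is invented for illustration, and the Okabe-Ito palette is one common colour-blind-safe choice, not one mandated by the guidance.

```python
import matplotlib.pyplot as plt

# Okabe-Ito palette: a widely used colour-blind-safe alternative to
# red/green coding. The data values below are illustrative only.
okabe_ito = ["#0072B2", "#E69F00", "#009E73"]  # blue, orange, teal
districts = ["Centre", "North", "South"]
air_quality_index = [42, 67, 55]

fig, ax = plt.subplots()
ax.bar(districts, air_quality_index, color=okabe_ito)
ax.set_ylabel("Air quality index")
ax.set_title("AQI by district (colour-blind-safe palette)")
plt.savefig("aqi_by_district.png")
```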


9. Data ownership and management

  1. Data ownership typically goes hand in hand with the responsibility for data management.

  2. Third parties contracted for city data management should be chosen responsibly, and adequate data processing agreements should be put in place.

  3. When acquiring a data set from a third-party source, smart cities should understand whether the data is public or private, and what limitations apply to its usage.


10. Privacy-by-design

  1. Comply with all legal requirements when acquiring, using, or publishing personal data (see also D1.2 Cities Guide to Legal Compliance for Data-Driven Decision Making).

  2. If you come across a personal data breach, report it to your Data Protection Officer.

  3. Minimise the amount of personal data obtained, used and stored (a minimal illustration follows this principle).
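
As a minimal illustration of data minimisation and pseudonymisation in practice, consider the following Python sketch. The dataset and column names are hypothetical; note that salted hashing is pseudonymisation, not anonymisation, so the result remains personal data under the GDPR.

```python
import hashlib
import pandas as pd

# Illustrative raw record set; the column names are hypothetical.
raw = pd.DataFrame({
    "citizen_name": ["A. Smith", "B. Jones"],
    "home_address": ["1 Main St", "2 High St"],
    "travel_zone": ["north", "centre"],
    "trip_count": [12, 7],
})

# Data minimisation: keep only the fields the task actually needs.
needed = raw[["travel_zone", "trip_count"]].copy()

# If a stable key is unavoidable, replace direct identifiers with a
# salted hash. This is pseudonymisation, not anonymisation: the salt
# must be kept secret, and the output is still personal data.
SALT = b"replace-with-a-secret-salt"
needed["subject_key"] = [
    hashlib.sha256(SALT + name.encode()).hexdigest()[:16]
    for name in raw["citizen_name"]
]
print(needed)
```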


11. Anonymised data preference

  1. Do not use personal data unless it is strictly necessary for your task and proportionate to meeting the pre-defined purpose of your activity.

  2. If anonymous data is not available, but personal data is, ensure that the data is anonymised before its further use, if possible. Ask the upstream data provider, who best understands the data, to anonymise the data before it is supplied.

  3. Non-anonymised data should not be made public (or released as open data) unless strictly required for carrying out the task in question, and unless cleared by the Data Protection Officer for publication.

  4. Where data is anonymised, do not proactively take any steps to re-identify the data (link the data to individual persons). The following techniques and procedures, for example, should be avoided unless the goal is actually to re-identify otherwise anonymous or pseudonymised data (a simple check for this kind of risk is sketched after this principle):

    1. Singling out, which corresponds to the possibility of isolating some or all records which identify an individual in the dataset;

    2. Linkability, which is the ability to link at least two records concerning the same data subject or a group of data subjects (either in the same database or in two different databases). If an attacker can establish (e.g., by means of correlation analysis) that two records are assigned to the same group of individuals but cannot single out individuals in this group, the technique provides resistance against “singling out” but not against linkability; or

    3. Inference, which is the possibility of deducing, with significant probability, the value of an attribute from the values of a set of other attributes.

  5. If the risk of re-identification materialises for a given dataset, take all reasonable steps, seek appropriate expert advice and apply all relevant professional standards to mitigate the risk of a privacy breach and further unlawful processing of personal data.
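
The techniques above can be made concrete with a simple singling-out check: count how many records share each combination of quasi-identifiers and flag groups smaller than a threshold k (a k-anonymity-style test). The column names, toy data, and threshold below are illustrative assumptions, not part of the guidance.

```python
import pandas as pd

def singling_out_risk(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 5) -> pd.DataFrame:
    """Return quasi-identifier combinations shared by fewer than k records.
    A group of size 1 means an individual can be singled out; small groups
    also increase linkability risk. The threshold k is an assumption to be
    chosen per use case."""
    group_sizes = df.groupby(quasi_identifiers).size().reset_index(name="group_size")
    return group_sizes[group_sizes["group_size"] < k]

# Toy example with hypothetical quasi-identifier columns:
records = pd.DataFrame({
    "postcode": ["1000", "1000", "1000", "2000"],
    "birth_year": [1980, 1980, 1980, 1955],
    "visits": [3, 5, 2, 9],
})
# Both groups fall below k=5; the unique ("2000", 1955) record
# can be singled out directly.
print(singling_out_risk(records, ["postcode", "birth_year"]))
```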


12. Training and sufficient data usage information

  1. Ensure that sufficient information is provided about the application, including how it works and the data the model sources from.

  2. If applicable, provide a contact to the application administrator for possible troubleshooting.

  3. Ensure all people involved have an understanding of these ethical principles.
