Toolkit
4.2 Data Ethics
This page of the Tools and Tutorials section of the Toolkit introduces three tools related to data ethics. The first tool is the Data Ethics Toolkit, which helps citizen and community science practitioners explore ethical considerations around data. It guides project leaders in identifying ethical issues and incorporating ethical principles into their projects. The second tool is the Ethics and Algorithms Toolkit, which helps organizations develop ethical algorithms and assess potential risks and biases. It provides tools and resources throughout the algorithm development lifecycle. The third tool is the FAIR Data Maturity Model, which helps assess how FAIR research data are by providing indicators and maturity levels of FAIRness. It helps researchers, data publishers, and funding agencies improve the accessibility and usability of their data. These tools aim to promote the responsible and trustworthy use of data in research projects.
Data Ethics Toolkit
The Data Ethics in the Participatory Sciences Toolkit is a resource created by and for citizen and community science practitioners to explore and identify ethical considerations around data. It has been designed to support the Citizen Science Association's capacity to promote a culture of data ethics in the participatory sciences. The toolkit focuses primarily on project leaders of top-down, institutionally driven projects and considers the obligations between project leaders, participants, and partners. It is designed to support ethical thinking, with the understanding that a culture of ethical norms is necessary to promote ethical behavior. It guides project leaders to identify ethical issues and obligations and to learn how to use key ethical concepts and principles to make appropriate decisions for their projects. Note that the toolkit is mostly US-focused, so your project may be subject to laws, regulations, and policies that it does not cover.
Why use this tool?
In general, data ethics is important because it helps ensure that data is used in a responsible and trustworthy way. This includes considering the ethical implications of collecting, storing, and sharing data, as well as the potential impact of the data on the people who generate it or are affected by it. Data ethics helps ensure that people's privacy and rights are respected, that data is collected and used responsibly, and that trust is built between data users and the people whose data is used, so that data ultimately benefits everyone. The Data Ethics Toolkit is designed to assist you in recognizing and evaluating ethical considerations related to data generated especially by participatory science projects.
Using this toolkit, you will be able to make appropriate decisions for your project by incorporating ethical principles such as respect, reciprocity, transparency, and accountability. It emphasizes two main obligations: to the participants who make the research possible, and to the science itself. This toolkit aims to facilitate data governance that broadens decision-making and meets the needs of both participants and science.
To the tool: https://citizenscience.org/data-ethics/
To the tutorial: Data Ethics for Practitioners
Ethics and Algorithms Toolkit
The Ethics and Algorithms Toolkit, developed by the Center for Government Excellence (GovEx) at Johns Hopkins University, DataSF, the Civic Analytics Network at Harvard University, and Data Community DC, is an online platform to help organizations develop ethical algorithms. It provides a comprehensive set of tools, guidelines, resources, and templates to help organizations design, develop, and deploy ethical algorithms. It also includes a risk assessment framework to evaluate the potential for an algorithm to cause harm. The toolkit covers the entire lifecycle of ethical algorithm development and deployment, from initial design to actual implementation. GovEx has worked with leading organizations and experts in the field to ensure that the toolkit is comprehensive and up to date with best practices. The toolkit is designed to provide guidance and support to organizations seeking to develop ethical algorithms and to help reduce the potential for harm.
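One way to think about such a risk assessment is to rate an algorithm along several dimensions and combine the ratings into an overall risk level. The Python sketch below illustrates that general idea only; the dimensions, scale, and thresholds are hypothetical assumptions and are not the toolkit's actual framework.

```python
# Simplified, hypothetical risk screening: rate an algorithm along a few
# dimensions and map the combined score to a coarse risk level.
# Dimensions, scale (1 = low, 3 = high), and thresholds are illustrative only.

DIMENSIONS = ("impact_on_people", "data_bias_risk", "lack_of_transparency", "automation_level")

def overall_risk(scores: dict) -> str:
    """Map per-dimension scores (1 = low, 3 = high) to an overall rating."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    total = sum(scores[d] for d in DIMENSIONS)
    if total >= 10 or max(scores[d] for d in DIMENSIONS) == 3:
        return "high"      # any severe dimension forces a high rating
    if total >= 7:
        return "medium"
    return "low"

# Example: a hypothetical eligibility-scoring algorithm trained on biased data.
print(overall_risk({
    "impact_on_people": 3,
    "data_bias_risk": 3,
    "lack_of_transparency": 2,
    "automation_level": 2,
}))  # -> "high"
```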
Why use this tool?
Algorithms and AI applications can have biases that lead to injustice and harm. For example, facial recognition algorithms have been found to be less accurate at identifying people from certain racial and ethnic backgrounds, leading to false positives. Algorithms used to predict criminal recidivism have likewise been found to be biased against certain racial and ethnic groups, and algorithms used to assess job applicants have been found to discriminate against women. Thus, to minimize the risks associated with algorithms and AI applications, it is important to take ethical considerations into account in their development and use. This includes conducting a thorough risk assessment, ensuring that data is collected and used responsibly, and regularly testing and monitoring algorithms to ensure that they do not exhibit discriminatory or biased behavior. In addition, organizations should ensure that any decisions made by algorithms and AI applications are accountable and transparent, and that individuals' rights and privacy are respected.
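The regular testing and monitoring described above can be partially automated with simple group-level checks. The following Python sketch computes a demographic parity gap (the difference in positive-decision rates between groups) on hypothetical model outputs; the example data, the 0.2 threshold, and the choice of metric are illustrative assumptions, not requirements of the toolkit.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-decision rates between groups,
    plus the per-group rates.

    predictions: iterable of 0/1 model decisions (hypothetical example data)
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening decisions for two applicant groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, grps)
print(f"positive rates per group: {rates}")
if gap > 0.2:  # threshold chosen for illustration only
    print(f"warning: demographic parity gap of {gap:.2f} exceeds threshold")
```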
It should also be noted that although researchers are not required to obtain explicit consent from individuals for the collection, processing, and use of non-sensitive personal data for research purposes, obtaining informed consent in these cases as well has become a general ethical consensus in scientific practice. Although initially developed to support local government, the Ethics and Algorithms Toolkit can be adapted for use in the private sector and academia and is therefore a useful tool for assessing and minimizing risk when developing and using an algorithm or AI application in your research. In addition, the Ethics and Algorithms Toolkit is open source and accessible to everyone.
To the tool: https://ethicstoolkit.ai/#toolkit
Additional resource: "The promise and peril of algorithms in local government", an interview with Andrew Nicklin and Miriam McKinney of GovEx
FAIR Data Maturity Model
The RDA FAIR Data Maturity Model is a set of guidelines developed by the Research Data Alliance (RDA) to assess the FAIRness of research data, providing indicators, priorities, and maturity levels for measuring how FAIR your data resources are. The model can be used by researchers, data publishers, project managers, and funding agencies to determine the level of FAIRness achieved by data resources and to identify where improvements can be made. Applying the model increases the coherence and interoperability of existing and emerging FAIR assessment frameworks and allows results to be compared in a meaningful way. In addition to the FAIR principles, which are described in more detail in the "Principles and Guidelines" section of this toolkit, the FAIR Data Maturity Model provides an interactive spreadsheet that you can use to assess the FAIRness of your data easily.
Why use this tool?
Keeping your data FAIR is important because it allows data to be used more efficiently and effectively. FAIR data is easier to find, access, integrate, and reuse, which helps ensure that it is used in the best possible way. With FAIR data, researchers, organizations, and businesses can make better decisions, gain insights, and create new products and services. FAIR data also fosters data sharing and collaboration, which are essential for driving research and innovation.
The tool is based on the four core principles of FAIR (Findable, Accessible, Interoperable, and Reusable). To measure progress, each indicator is scored against five levels of compliance, from 0 (not applicable) to 4 (fully implemented). Alternatively, the pass-or-fail approach measures FAIRness per area while taking the indicator priorities into account: each indicator receives a binary response, and the level per area is determined by compliance with the priorities. Both approaches can also be combined to take advantage of each.
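To make the pass-or-fail idea more concrete, the Python sketch below derives a per-area level from pass/fail results on indicators grouped by priority. The level boundaries used here are simplified assumptions for illustration; the authoritative indicator definitions and level descriptions are those in the RDA spreadsheet.

```python
# Hypothetical aggregation sketch: derive a per-area FAIRness level from
# pass/fail results on indicators with priorities. The level boundaries are
# illustrative; the official definitions are in the RDA spreadsheet.

def area_level(indicators: list[tuple[str, bool]]) -> int:
    """indicators: (priority, passed) pairs,
    with priority in {"essential", "important", "useful"}."""
    def passed_all(priority: str) -> bool:
        subset = [ok for prio, ok in indicators if prio == priority]
        return all(subset) if subset else True

    if not passed_all("essential"):
        return 0  # essential indicators not met -> area not considered FAIR
    if not passed_all("important"):
        return 2  # essentials met, some important indicators still open
    if not passed_all("useful"):
        return 3  # essentials and important met, useful ones partially
    return 4      # everything the area asks for is in place

# Example: hypothetical pass/fail results for the "Findable" area.
findable = [
    ("essential", True),   # e.g. persistent identifier assigned
    ("essential", True),   # e.g. metadata include the identifier
    ("important", False),  # e.g. rich metadata still missing
    ("useful", False),
]
print(area_level(findable))  # -> 2
```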
After assigning the appropriate levels to each indicator in the second sheet of the spreadsheet, you will get a detailed graphical report of the FAIRness of your data in the first sheet. This will help you see whether your data is already FAIR or whether, where possible, you still need to improve its FAIRness.
To the tool: https://www.rd-alliance.org/system/files/FAIR_evaluation_levels_v0.01.xlsx
Additional resource: Webinar "RDA FAIR Data Maturity Model - Aligning International Initiatives for Promoting & Assessing FAIR Data"