Dark Patterns and White Patterns in AI

By Aiko Yamashita

Aug. 9, 2023 · 3 min. read time

How do we know whether we are contributing to a better world by using AI, or whether we are part of the problem? Simply using the models as they come, without questioning how they are made (being passive users), or even worse, not verifying the quality of their output and spreading false information, will definitely make things worse.

Dark Patterns in AI:

We need to be aware that behind this wonder of technology, as some people describe it, hides an obscure set of dark patterns.

We have come to know that for these models to actually work, the input data is taken from many sources on the internet without the consent of its owners, violating not only privacy laws but also the intellectual and creative property of individuals, without any form of remuneration.

There is also an unavoidable process in creating these models, and it is of a human, manual nature. The large input data sets need to be categorised (or “tagged”, as it is more commonly called) manually, and abusive (and often extremely toxic) content needs to be filtered out. This work is often done by workers in data centres located in developing countries, where they lack minimum working conditions and are not given proper health support after being exposed over and over to traumatic content.
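To make the tagging step concrete, here is a minimal, purely illustrative sketch of what a manual labelling record and a filtering pass over such records might look like. Everything here (the field names, the "toxic" label, the filter_batch helper) is hypothetical and not taken from any real pipeline:

```python
import json

# Hypothetical labelling record, as a human annotator might produce it.
# The field names ("text", "label", "annotator_id") are illustrative only.
record = {
    "text": "Example passage scraped from the web...",
    "label": "toxic",          # assigned manually by a human reviewer
    "annotator_id": "worker-0042",
}

def filter_batch(records):
    """Keep only records that human annotators marked as safe."""
    return [r for r in records if r["label"] != "toxic"]

# JSONL (one JSON object per line) is a common interchange format
# for labels like these.
print(json.dumps(record))
print(filter_batch([record]))  # -> [] since this record was flagged
```

The point is not the code itself but the scale: every such judgment is made by a person, one record at a time.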

White Patterns in AI:

Luckily, there is progress in an area that I like to call “white patterns in AI”. These are organisations, and their technologies, that counter misinformation, bias, infringement of authorship and privacy, and help to sustain human rights and planetary health.

I will give you some examples:

GPTZero (https://gptzero.me) is an initiative to detect AI-generated content, which can help to detect misinformation, students cheating on exams, or websites generated for clickbait with little or no human intervention (see article here). Although these technologies should not be considered the “final truth”, they are supportive tools for fighting data pollution on the internet, for example.
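As a rough illustration of how such a detector is used in practice, the sketch below sends a piece of text to GPTZero's API over HTTP. The endpoint, header, and response fields are assumptions based on GPTZero's public API documentation at the time of writing and may have changed, so treat this as a sketch, not a reference:

```python
import requests

# Assumed GPTZero endpoint and auth header; verify against current docs.
API_URL = "https://api.gptzero.me/v2/predict/text"
API_KEY = "your-api-key-here"  # placeholder, obtain one from gptzero.me

def ai_generated_probability(text: str) -> float:
    """Return the detector's probability (0..1) that `text` is AI-generated.
    The response shape used here is an assumption and should be checked."""
    response = requests.post(
        API_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["documents"][0]["completely_generated_prob"]

if __name__ == "__main__":
    print(ai_generated_probability("The quick brown fox jumps over the lazy dog."))
```

In keeping with the caveat above, a score like this is a supporting signal for a human reviewer, not a final verdict.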

Another good example is Have I Been Trained (https://haveibeentrained.com), created by artist and musician Holly Herndon, which helps individuals find out whether their personal or creative images have been used to train AI without their consent.

When we use an AI model or tool, we assume responsibility not only for how we use it, but also for the processes and the data behind it, and there is increasing evidence that neither these models nor the data providers behind these platforms have safeguards protecting privacy or authorship (see this article). Also, under Åpenhetsloven (the Norwegian Transparency Act), we are obliged to report on the working conditions of the people involved in the production of a good or a service. With the current level of transparency in the data lineage and supply chain of AI companies, this requirement is almost impossible to meet.

To really leverage AI for the greater good, we will probably need to create our own models, over which we have good governance and accountability. But there is also an imperative to push for mechanisms that can uphold human rights, and for regulation of the production of the massive foundation models being created by Big Tech.

We need to speak up and have open, public conversations about it, challenge how things are done, and work together on policies and solutions to make sure we don’t make things worse.

Organisations and technologies such as GPTZero will become increasingly important in fighting AI dark patterns. AI is a tool, and it is ultimately up to us to decide how to develop such tools and how to use them for the benefit of everyone.
