Generative AI as a Vector for Harassment and Harm
2023
Generative AI technologies have the potential to bring about incredible advancements and benefits to society. However, it is crucial to consider the risks and negative impacts of their use. This article examines how generative AI can be used to cause harm and explores the various types of harm that may arise from its misuse.
Generative AI is a technology that uses algorithms to create new and original content, such as images, videos, music, and text. It learns from large amounts of data to understand patterns and styles, enabling it to generate new content that fits within what it has learned. While this technology is exciting and can produce novel and interesting results, reflecting on its ethical and responsible use is essential.
Several generative AI systems have gained popularity, such as ChatGPT, Bing AI, Google Bard, and Snap's My AI. However, the focus should not solely be on the shiny new objects but on understanding the potential risks and taking measures to minimise them. The promise of generative AI to improve various aspects of our lives depends on using it for good rather than causing harm.
Different types of harm can be perpetrated using generative AI technologies. These include harassment, cyberbullying, hate speech, deepfakes, catfishing, sextortion, doxing, privacy violations, and identity theft/fraud. This is not an exhaustive list, as disinformation and fake news are other potential problems that may arise with this technology.
The first type of harm is harassment and cyberbullying. Generative AI enables the automatic creation and rapid dissemination of harassing or threatening messages, posts, or comments on various platforms. This can harm the targeted individuals significantly, creating a hostile online environment and causing psychological and emotional distress. The technology can also analyse personal information to generate highly specific and threatening content, making the harassment more personal and intimidating.
Hate speech is another area where generative AI can be misused. It can create and propagate large volumes of hate speech, both general and targeted, amplifying the impact and visibility of such content. Biases present in the training data or model design can lead the system to generate offensive and harmful narratives, reinforcing stereotypes and marginalising certain communities.
Deepfakes, the algorithmic alteration of images or videos to deceive or manipulate, are a significant concern. Generative AI can produce convincing deepfakes for harassment, revenge porn, or defamation. These can cause significant emotional and psychological harm, damage personal relationships, and have long-lasting consequences for victims.
Catfishing, the creation of fake online identities to deceive others, can also be facilitated by generative AI. The technology can generate realistic profiles and simulate dialogue to build emotional connections with targets. This can lead individuals to believe they are interacting with a genuine person, resulting in emotional manipulation and potential harm.
Sextortion, a form of online blackmail, is another abuse that generative AI and deepfakes make easier. Highly realistic and deceptive content can be used to coerce individuals into providing explicit images or engaging in sexual activities under the threat of having their intimate content exposed without consent.
Finally, we should not forget the potential for generative AI to be used in doxing, where personal information about individuals is compiled and publicly disseminated without permission. The data-gathering and synthesis capabilities that underpin generative AI can make it easier for aggressors to collect and weaponise personal data.
In summary, while generative AI technologies offer tremendous promise, it is crucial to anticipate and address the risks and negative impacts of their misuse.