Six menacing AI threats you need to know about
The artificial intelligence (AI) market in South Africa is expected to reach a valuation of $2.4 billion by the end of this year (https://apo-opa.co/481NZ3n), with a projected annual growth rate of 21% from now until 2030. Locally, the technology can reduce security threats, improve decision-making, address legacy issues, and deliver significant social benefits. But Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 AFRICA (https://www.KnowBe4.com), cautions that, despite these remarkable applications and implications, the technology carries risks that should not be taken lightly.
“Generative AI models are trained on data from various sources, and not all of those sources are regulated, verified, or placed in the necessary context,” she explains. “AI is quite useful for managing repetitive administrative tasks involving statistics and spreadsheets. But when we use it to guide decisions that could affect people’s lives, that’s when it gets worrying.
“AI is a machine learning concept built on human creativity and on data that is frequently erroneous and biased. As Microsoft researcher and University of Southern California professor Kate Crawford notes (https://apo-opa.co/3GwCPYK), artificial intelligence is neither really artificial nor intelligent. If people are not aware of the risks, these can have long-term effects,” says Collard.
The following six dangers are the most alarming:
01: AI hallucinations: Earlier this year, a New York lawyer used a conversational chatbot for legal research. The AI added six fabricated precedents to his filing and misattributed them to well-known legal databases. This is an example of an AI hallucination (https://apo-opa.co/3uJjW2e), where the output is nonsensical or fabricated. Hallucinations occur when a prompt is not grounded in the AI’s training data, leading the model to respond with fabrications or contradictions.
02: Deepfakes: Fake images have consequences across many sectors, and the range of potential misuses for AI-generated images is growing, with a surge in revenge porn, fabricated employees, and phony identities. A specific kind of deep neural network, the Generative Adversarial Network (GAN) (https://apo-opa.co/481j1Zi), can create new data and extremely realistic visuals from random input. This technology makes it possible to create deepfakes—image, audio, or video manipulations in which sophisticated generative techniques are used to alter facial features. When used in political campaigns for divisiveness, disinformation, or persuasion, this kind of digital puppetry has serious repercussions.
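The adversarial training behind a GAN can be illustrated with a toy sketch (an illustrative assumption, not any production deepfake system): a one-parameter-pair generator learns to map random noise onto a target distribution by trying to fool a simple logistic-regression discriminator, while the discriminator simultaneously learns to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator starts from N(0, 1) noise
# and must learn an affine map a*z + b whose output mimics the real data.
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator (logistic regression) parameters
lr = 0.01

for step in range(5000):
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    real = real_samples(64)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. learn to score real samples high and generated samples low.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log D(fake), i.e. learn to fool
    # the discriminator. Chain rule through fake = a*z + b.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generated mean: {gen_mean:.2f}")  # should drift toward the real mean of 4
```

Real deepfake GANs follow the same two-player loop, only with deep convolutional networks over pixels instead of two scalar parameters, which is what makes their output so realistic.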
03: Automated and more potent attacks: Hackers are employing deepfakes in increasingly sophisticated attacks, directly leveraging the potential of the GANs discussed above. They use them in impersonation attacks, in which phony voices or even videos trick victims into making payments or complying with other fraudulent instructions. Hackers can also profit from jailbroken generative AI models, which enable them to streamline or automate their attack techniques—for instance, by generating phishing emails automatically.
04: Over-trusting machines: The “media equation theory” states that people tend to attribute human traits to computers and feel empathy for them. This inclination only grows stronger when people interact with robots that appear intelligent. While this can enhance user support and engagement in the service industry, it also presents a risk: the over-trust effect makes people more susceptible to social engineering, persuasion, and manipulation. More often than not, they believe and follow machines. Research shows that humans are prone to changing their answers to questions in order to follow recommendations from robots (https://apo-opa.co/3RvGVqg).
05: The manipulation problem: Through natural language processing, machine learning, and algorithmic analysis, AI can respond to and mimic emotions. Agenda-driven AI chatbots, for example, can draw information from multiple sources and use it in real time to pursue specific goals, including manipulation or persuasion, by reacting quickly to sensory input. These capabilities give predatory content, misinformation, disinformation, and scams a platform from which to spread.
06: Matters of ethics: Bias in the data, as well as the current lack of regulations governing AI development, data use, and implementation, raises ethical difficulties. Global initiatives are under way to address the ethical dilemmas presented by AI and to lower the risk of AI poisoning, in which data is manipulated to introduce biases or weaknesses. But there isn’t much of a push in South Africa right now to solve these problems. “This needs to change, because it’s critical to identify and manage the risk of contaminated AI data before it causes long-term harm,” Collard says.
We should exercise caution when disclosing personal information to AI chatbots and virtual assistants. “We should always be curious about how and by whom our data is being used,” Collard says. “Sharing private and confidential company information with data-training models carries a risk. AI is a useful tool, but it must be used carefully and thoughtfully, and relied upon only when it is shown to be accurate and offers the greatest benefit.”