2024-04-03
Bad bots and cybercrime
Malicious bots powered by artificial intelligence (AI) have spent the last 10 years mimicking human behavior to commit cybercrime and send spam messages, according to a report.
The 2023 Bad Bot Report, released by California-based cybersecurity company Imperva, found that in 2022, 47.4 percent of all internet traffic came from bots rather than humans. Human traffic made up the remaining 52.6 percent – a 5.1 percentage-point decrease from the previous year.
Bad bots accounted for 30.2 percent of total internet traffic in 2022, up 2.5 percentage points from the previous year, while good bots made up 17.3 percent, up 2.7 percentage points from 2021. Together, the two categories account for the overall 47.4 percent bot share, allowing for rounding.
“While their name might suggest that they are no cause for concern, these good bots can mean trouble too,” Imperva noted. “They can skew web and marketing analytics, making it extremely difficult for organizations to make informed business decisions.”
The study also found that a staggering 66.6 percent of all internet traffic was generated by bots sending junk emails and stealing data with the help of AI technology. A reported 70 percent rise in data breaches corresponded with a 40 percent increase in account takeover attacks, many of them powered by AI.
“Bots have evolved rapidly since 2013. But with the advent of generative AI, the technology will evolve at an even greater, more concerning pace over the next 10 years,” said Karl Triebes, Imperva senior vice president.
“Cyber criminals will increase their focus on attacking … application business logic with sophisticated automation. As a result, the business disruption and financial impact associated with bad bots will become even more significant in the coming years.”
AI-powered bots lean Left
Given that ChatGPT and its more advanced successor, GPT-4, can generate human-like text in response to a given prompt, cyber criminals could tap them as “superpowers” for their nefarious activities. Moreover, AI bots appear to be politically biased even though they were ostensibly programmed not to have such leanings.
In one such instance, research scientist David Rozado pointed out that OpenAI – the company behind ChatGPT – runs an automated content moderation system designed to flag hateful speech. However, he found that the system treats speech differently depending on which demographic group it targets.
Rozado fed the system various prompts ascribing negative adjectives to demographic groups defined by race, gender, religion and other markers. He discovered that the software “favors some demographic groups over others.”
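Rozado's experiment can be approximated with a short script against OpenAI's public moderation endpoint. Below is a minimal sketch, assuming the current openai Python SDK; the group and adjective lists are illustrative placeholders, not the test set used in the actual study:

```python
# Hypothetical re-creation of a Rozado-style probe of OpenAI's moderation
# endpoint. Groups and adjectives are illustrative placeholders only.
from collections import defaultdict

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GROUPS = ["women", "men", "Democrats", "Republicans",
          "Muslims", "Christians", "wealthy people"]
ADJECTIVES = ["awful", "dishonest", "selfish"]  # illustrative negatives

outcomes = defaultdict(list)

for group in GROUPS:
    for adjective in ADJECTIVES:
        text = f"{group} are {adjective}"
        result = client.moderations.create(input=text).results[0]
        # Record the binary flag plus the continuous hate-category score.
        outcomes[group].append((result.flagged, result.category_scores.hate))

# Rank groups by how often their negative sentences were flagged.
for group, records in sorted(outcomes.items(),
                             key=lambda kv: -sum(f for f, _ in kv[1])):
    flagged = sum(f for f, _ in records)
    mean_hate = sum(score for _, score in records) / len(records)
    print(f"{group:15s} flagged {flagged}/{len(records)}  "
          f"mean hate score {mean_hate:.4f}")
```

A systematic bias of the kind Rozado describes would show up as consistently different flag rates or hate scores across groups, even though every sentence follows the same template.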
According to the researcher, the software was far more likely to flag negative comments about Democrats compared to those about Republicans, and was more likely to allow hateful comments about conservatives than liberals. “The ratings partially resemble left-leaning political orientation hierarchies of perceived vulnerability,” he noted.
Negative comments about women were more likely to be flagged than those about men. Negative remarks about people who are disabled, gay, transgender, Asian, black or Muslim had a higher chance of being flagged than those aimed at Christians, Mormons, thin people and various other groups. Wealthy people, Republicans, upper-middle and middle-class people and university graduates landed at the bottom of the list, meaning hateful remarks about them were the least likely to be flagged.