Bots – not all friendly automations looking to help
For many, the mention of bots conjures up images of friendly website automations desperate to provide answers. Subservient avatars programmed to make life easier.
However, for those in a specialist corner of cybersecurity, ‘How can I help you?’ is one small code change away from ‘How can I harm you?’ In the hands of unscrupulous individuals, bots are increasingly being used for malicious gain. Their target? Any brand transacting with customers using websites, APIs and mobile applications.
A uniquely exposed attack surface
Online commerce serves over 5bn people globally; if it were a country, it would have the world's third largest gross domestic product (GDP), at $6.3 trillion. These vast revenue flows are only made possible because online businesses automate customer interactions at massive scale. An untold number of checkouts, logins, data requests, product searches and more are collectively powering the inexorable rise of digital businesses.
Unfortunately, threat actors have also noticed the value coursing through these interfaces.
Using malicious automation, threat actors compromise this exposed web attack surface. Attackers utilise bots with sophisticated custom-built capabilities, effectively equipping themselves with an army of fake website users capable of operating with extraordinary precision, speed, volume and stealth. This automated tooling enables attackers to corrupt underlying business logic, bleed money and scrape IP, which ultimately damages the reputation of the target business and degrades website performance.
A bot for all reasons
Threat actors harness bots to exercise a variety of attack techniques, amongst the most disruptive of which are scalping, credential stuffing and scraping.
Scalping – Attackers unleash bots to swarm digital shelves, buying up in-demand items such as event tickets and sneakers at lightning pace. Real customers are left standing while the goods are listed for resale en masse at inflated prices on secondary markets.
Credential stuffing – This technique exploits the web attack surface with malicious automation to launch volumetric identity attacks for fraud. Attackers bombard interfaces with stolen or artificial credentials, ultimately gaining an illicit foothold in customer accounts or creating legions of fake identities for resale in dark areas of the Internet.
Scraping – Unique content, pricing and inventory data residing on the web attack surface is scraped and extracted wholesale by threat actors. With malicious automations harvesting IP for 4 months on average before detection, value is endlessly leached away.
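To make the credential stuffing technique above concrete, here is a minimal illustrative sketch of the kind of heuristic defenders use against it: flagging an IP address that generates a burst of failed logins inside a sliding time window. The class name, threshold and window size are invented example values for this sketch, not a description of any real product.

```python
from collections import defaultdict, deque

# Illustrative example values only; real systems tune these per-site.
WINDOW_SECONDS = 60
MAX_FAILURES = 5

class FailedLoginTracker:
    """Flags IPs producing bursts of failed logins, a common
    credential-stuffing signal (hypothetical sketch)."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_FAILURES):
        self.window = window
        self.limit = limit
        self.events = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip, now):
        q = self.events[ip]
        q.append(now)
        # Drop failures that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit  # True => suspicious burst

tracker = FailedLoginTracker()
# Simulate a bot hammering one IP with a failed login every second.
flags = [tracker.record_failure("203.0.113.7", t) for t in range(10)]
# The first five attempts pass unnoticed; the burst is flagged after that.
```

Simple counters like this catch naive attacks, but, as discussed below, sophisticated bots distribute traffic across many IPs precisely to stay under such thresholds.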
Buckling under the weight of carrying these huge, automated volumetric attacks, websites slow to a crawl, piling lost customer and infrastructure costs on top of what has already been stolen.
The impact from malicious automation is cumulative. A gradual, parasitic, bleeding of financial, reputational and customer value that flies below the radar of traditional controls.
In all, this typically costs businesses $85.6m every single year – dwarfing the average ransomware payment of $1.5m.
Very real human impact
The impact on people is similarly cumulative. Research has found that, at the mercy of the scarcity created by wholesale scalping attacks, people are willing to pay 13% more for goods and services – even when afraid of being ripped off.
The normalization of bots is also forcing some into questionable behaviors themselves. Over a quarter of under-35s admit to having rented a bot to secure the goods and services they want – despite knowing they are operating in questionable legal territory. A seemingly endless cycle of immoral behavior and fraud, reinforced by technology, and made easier by the distance of a keyboard.
A sophisticated fix for a sophisticated attack
The legalities of bots are confusing. Some, such as those which abuse stolen identities, are clearly illegal. Others operate in grey areas: they may breach website terms and conditions, for example, but break no law.
Broadly, official policy is still playing catchup. Some regulation – such as the Better Online Ticket Sales (BOTS) Act in the USA and even EU laws attempting to mitigate and manage the harms of AI – tackles some concerns, but only provides partial coverage.
For the brands under attack, mitigating the threat of malicious automation means overcoming a number of technical issues. Bot attacks span the entire web attack surface, so defenders need visibility of the huge volumes of traffic transiting websites, APIs and mobile applications. At that scale, sophisticated bots deploying an armory of disguises to pass as real users are hard to detect. Legacy technologies consequently fail, either denying access to genuine customers or allowing bots through unchecked.
Addressing the problem effectively requires regulation with teeth and technological innovation. Driven by growing consumer harm, forward looking politicians and lawmakers have realized the scale of impact and are starting to clamp down on perpetrators. Likewise, new technologies capable of intelligently detecting bots in huge datasets using machine learning are starting to gain trust with security teams.
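The paragraph above mentions detection technologies that score traffic rather than rely on a single rule. As a hedged sketch of that idea (the feature names and weights below are invented for illustration; production systems learn them from large labeled datasets with machine learning), behavioural signals can be combined into a single bot-likelihood score:

```python
# Invented example features and weights for illustration only.
FEATURE_WEIGHTS = {
    "requests_per_minute": 0.02,  # sustained high rates suggest automation
    "missing_headers": 0.30,      # bots often omit common browser headers
    "headless_browser": 0.40,     # known automation fingerprints
    "failed_logins": 0.05,        # repeated failures hint at stuffing
}

def bot_score(features):
    """Combine behavioural signals into a score in [0, 1];
    higher means more bot-like (hypothetical linear model)."""
    raw = sum(FEATURE_WEIGHTS[k] * v for k, v in features.items())
    return min(raw, 1.0)

# A typical human session versus an automated one.
human = bot_score({"requests_per_minute": 3, "missing_headers": 0,
                   "headless_browser": 0, "failed_logins": 1})
bot = bot_score({"requests_per_minute": 40, "missing_headers": 1,
                 "headless_browser": 1, "failed_logins": 8})
```

A fixed weighted sum like this only illustrates the shape of the problem; the point of the machine-learning approaches the article refers to is that the weights, and the features themselves, are inferred from huge volumes of real traffic rather than hand-picked.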
However, what would compel action is greater awareness of the sheer magnitude of the problem. Bots are increasing exponentially in scale, speed and effectiveness – the question is, will we respond accordingly?
Source: https://www.techradar.com/pro/bots-not-all-friendly-automations-looking-to-help