Can AI be Weaponized for Cyberattacks?
Published on 09 Oct, 2020
Artificial Intelligence (AI) has proved to be a double-edged sword for cybersecurity. Companies are implementing AI-enabled tools to build strong security frameworks, but hackers are using the same capabilities to design advanced, precise cyberattacks. Could AI become a major threat to enterprises in the future?
Cyberattacks have become ubiquitous and pose a major threat to industries worldwide. They are a significant risk currently faced by companies and government agencies alike. A new class of stealthy digital attackers uses the latest technology to steal or manipulate data, causing huge financial and reputational losses. Cybersecurity agencies are deploying innovative techniques and cutting-edge technology to strengthen the protective layer around sensitive data. However, cybercriminals are not far behind!
Security agencies have identified AI as one of the best-suited technologies to handle cyber threats due to its various capabilities such as:
- Identifying malware
- Analyzing and interpreting user behavior
- Establishing user patterns
- Detecting anomalies or irregularities
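The last capability above can be illustrated with a minimal sketch: flagging days whose activity deviates sharply from a user's normal pattern. The data, function name, and threshold below are illustrative assumptions, not a production detector.

```python
def flag_anomalies(daily_logins, threshold=2.5):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the mean (a simple z-score test).
    The threshold of 2.5 is an illustrative choice."""
    n = len(daily_logins)
    mean = sum(daily_logins) / n
    variance = sum((x - mean) ** 2 for x in daily_logins) / n
    std = variance ** 0.5
    if std == 0:
        return []  # no variation, nothing to flag
    return [i for i, x in enumerate(daily_logins)
            if abs(x - mean) / std > threshold]

# A user who normally logs in 4-6 times a day suddenly logs in 40 times:
history = [5, 4, 6, 5, 4, 5, 6, 4, 5, 40]
print(flag_anomalies(history))  # → [9], the day of the spike
```

Real systems model many signals at once (time of day, location, device), but the underlying idea is the same: learn the baseline, then flag deviations.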
Ironically, these features can also be used to design highly sophisticated cyberattack programs.
Can AI become a weapon of destruction?
AI can make cyberattacks more dangerous and harder to detect. With its help, cybercriminals are developing self-learning automated malware, ransomware, and phishing attacks, and conducting social engineering at scale. Cybercriminals can apply AI, particularly deep learning, to breach security systems. The following are some recent examples of AI-based attacks:
- AI can superimpose one individual’s voice or face over another’s in a call or video conference, a technique known as ‘deepfakes.’ Cybercriminals used it to attack a UK-based company: AI-based software impersonated the CEO’s voice on a call and convinced employees to make a fraudulent transfer of almost USD 243,000. Because recordings of the CEO’s voice were available online, the hackers had easy access to them, and the attempt succeeded.
- A prestigious online platform for freelancers fell victim to a cybercrime that affected approximately 3.75 million users. The hackers stole registered Social Security Numbers and bank account details. An AI-enabled botnet carried out this massive attack directly against the servers, and the entire site had to be shut down until security could be restored and reinforced.
- A well-known blogging and hosting website faced an onslaught from botnets. The affected accounts were vulnerable and eventually gave hackers access to users’ personal information and financial details, such as credit card numbers and bank account details.
- An AI-based tool was used to hack accounts on a popular social media website. Users were locked out of their profiles after the hackers changed their passwords. Investigation revealed a bug in the site’s code: using AI-enabled tools, the attacker exploited this vulnerability to read users’ passwords from the URL in their browsers.
Other advanced AI-based methods will likely play a pivotal role in social engineering and social media cybercrime. Because AI can learn individual behavior and patterns, AI-based malware can impersonate a user. Armed with a user’s writing style, gleaned from social media and email accounts, the malware could craft credible messages that appear to be genuine communications. The majority of such attacks arrive as email attachments, since a target will not think twice before opening an email from a known contact and following its instructions.
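The style-mimicry idea above can be seen in miniature: even a simple bigram model trained on someone's past messages produces text in a loosely similar style. Real attacks would use far more capable language models; the corpus below is an invented example for illustration only.

```python
import random

def build_bigrams(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram model from `start`, sampling one follower at a time."""
    random.seed(seed)  # deterministic for the demo
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Invented sample of a target's habitual phrasing:
corpus = ("please review the attached invoice and send the payment today "
          "please send the updated report and review the numbers today")
model = build_bigrams(corpus)
print(generate(model, "please"))
```

Every word transition in the output was observed in the training text, which is why even this toy model "sounds like" its source; larger models capture the same effect far more convincingly.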
AI can also be used deviously to enter and move around a system discreetly. It can become a silent spectator, expertly spreading within a digital environment and compromising connected devices. With its ability to analyze vast volumes of data rapidly, AI-based malware can identify and quickly extract valuable information.
Although AI algorithms are complex, AI will soon be the favored tool of digital attackers. Organizations across the globe will have to become more alert and proactive. Technology research companies can help organizations understand:
- Preventive measures
- Actions to be taken in the event of such attacks
- The profile of employees who should take the lead in such scenarios
- Solutions for such issues (available and under research)
AI is a brilliant technology, but its impact will depend on who wields it. While cybersecurity agencies can make it a savior, digital attackers can just as easily weaponize it. AI’s ability to learn and adapt quickly will help it prosper in a new era of both cyberattack and cybersecurity.