NEW YORK – Artificial intelligence has become a game changer in many areas of our daily lives, including cybersecurity. With the rising use of AI-driven tools like ChatGPT, the number of cyberattacks has doubled, and the attacks have become more sophisticated. Cybersecurity experts say that AI-powered cybersecurity tools can help protect your privacy in this new reality, but they are not a silver bullet.
“AI will not steal jobs from hackers, at least not anytime soon. Cybercriminals are keen users of AI-driven tools, but it’s about improvement, not replacement. Hackers have learned how to use AI to increase the capacity of their work and make their job easier, quicker, and more effective. AI tools have made it possible to automate a significant portion of phishing attacks, and the frequency of such attacks is expected to escalate in the future, posing a significant cybersecurity threat,” says Marijus Briedis, CTO at NordVPN.
There are several ways hackers use AI to increase the success rate of their attacks.
Tailoring spear-phishing attacks
The most common way cybercriminals use AI is to create personalized, convincing phishing attacks. Because AI can analyze vast amounts of publicly available data and learn a target’s behavior and preferences, AI-generated phishing emails can be highly effective at deceiving individuals. Moreover, public information is not the only thing that popular AI tools have at their disposal.
“As AI systems become more prevalent, there is an increased risk of mishandling or misusing sensitive data. For example, if an employee of a certain company uses an AI tool to write a report based on confidential information, the same data could later be used to create so-called spear-phishing attacks that are highly tailored to individual targets, increasing the likelihood of success. Once you get a phishing email containing information that is supposed to be confidential, there is a big chance you will fall into the trap,” explains Briedis.
Modifying malware in real-time
AI tools help hackers automate tasks like reconnaissance and crafting custom malware, making their attacks more efficient, harder to detect, and easier to launch at scale. For example, AI-powered bots can conduct automated brute-force attacks, driving up the overall volume of attacks.
“Hackers also use AI to reinforce malware attacks so they evade traditional cybersecurity defenses. Using AI algorithms, attackers modify malware in real time to avoid detection by antivirus and other security tools. With this kind of automation, hackers are seriously challenging traditional cybersecurity tools and exploiting their vulnerabilities,” says Briedis.
How to mitigate cybersecurity risks posed by AI
While AI has proved effective at improving cyberattacks, it can also be used to protect users, though it is not a silver bullet. “Cybersecurity requires a multi-layered approach, including user education, regular software updates, strong passwords, and best security practices,” says Briedis.
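As a small, concrete illustration of the “strong passwords” layer Briedis mentions (this sketch is not from the article; the function name and default length are illustrative assumptions), here is a minimal Python example that generates a random password using the standard library’s cryptographically secure secrets module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from a cryptographically secure source.

    The function name and default length are illustrative choices,
    not something prescribed in the article.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Example usage: print one freshly generated password.
    print(generate_password())
```

The secrets module is used instead of random because it draws on the operating system’s secure randomness source, making the output far harder for automated brute-force tools to predict.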
Cybersecurity expert Marijus Briedis advises how to mitigate cybersecurity risks posed by AI-driven attacks:
Source: mitechnews.com