Hackers are using artificial intelligence (AI) and encryption in new ways to make cyberattacks more damaging, according to new research from Microsoft Corp.
Tom Burt, Microsoft's vice president of customer security and trust, said hackers are combining AI tools that have been on the market for some time with the generative AI chatbots that emerged last year to craft stealthier cyberattacks.
"Cybercriminals and nation states are using AI to improve the language they use in phishing attacks or the images they use in influence operations," he said.
At the same time, a new development in ransomware shows that hackers can encrypt data remotely, rather than in the hacked network, Microsoft said. By sending encrypted files to another computer, attackers leave less evidence behind, making it harder for the targeted business to recover. This technique was used in about 60 percent of the human-operated ransomware attacks Microsoft observed last year.
Against the backdrop of a surge in attacks, new AI and encryption tools used by hackers are making it harder for companies to defend their networks.
Microsoft researchers analyzed data from the 135 million devices the company manages for customers and the more than 300 hacking groups it tracks, and found that data-theft extortion attacks doubled between November 2022 and June 2023. In such attacks, hackers steal data and demand a ransom from the victim.
In addition, the firm said in a report released Thursday that human-operated ransomware attacks increased 200 percent between September 2022 and June 2023. Unlike automated ransomware attacks, human-operated ransomware attacks are customized.
Now that many companies have improved their ability to recover from the damage caused by ransomware itself, hackers are shifting to a different way of making money: stealing data first and then extorting victims for a ransom, said Jake Williams, a faculty member at cybersecurity research firm IANS Research and a former National Security Agency hacker. "There is no question that we are seeing more threat actors turn to extortion," he said.
Lane Bess, CEO of AI cybersecurity provider Deep Instinct, said tech and networking companies are quickly adding AI capabilities to their own security tools, fighting fire with fire. "The fight has to escalate," Bess said Monday at the Wall Street Journal CIO Networking Summit.
Cisco Systems Inc.'s (CSCO) $28 billion acquisition of Splunk, announced in September, reflects a broader shift in the market: investment is flowing to companies focused on using AI to manage security and risk.
U.S. cybersecurity and national security officials have warned of the risk of hackers using powerful AI tools to infiltrate corporate and government systems, saying the U.S. government needs to develop AI technology to counter attacks from hostile foreign powers. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, said in April that the potential use of generative AI tools by cybercriminals and nation-state hackers was a significant threat, adding that there are currently no legal safeguards to limit their use. Last month, tech executives including Elon Musk, Mark Zuckerberg and Bill Gates met behind closed doors with U.S. senators about AI and potential regulatory issues.
Lukasz Olejnik, an independent cybersecurity researcher and consultant, said hackers are using large language models like those behind generative AI tools to speed up the production of elements of a cyberattack, such as writing phishing emails or creating malware, making it easier to carry out a hack. Such large language models require enormous amounts of data to train. "Some tasks that used to be done by teams can now be done by one person," he said.
Diego Souza, chief information security officer at manufacturing company Cummins (CMI), says he has seen a big increase in near-realistic phishing emails since generative AI tools, including OpenAI's ChatGPT, came out last year. Emails now mimic real companies and people, he says, and use more persuasive language than in the past. "I've seen some generative AI phishing that's just amazing," Souza said.
Microsoft found that cybercriminals can order underground phishing services for between $200 and $1,000 per month.
Burt said sophisticated hacking groups may start trying to use AI to improve proven cyberattacks. Phishing aimed at breaking into password-protected accounts, along with password spraying and brute-force attacks, remains the most common way hackers infiltrate corporate systems. "What [hackers] are looking for is: what's the cheapest way to break into a target?" he said.
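The distinction between the attack styles mentioned above matters for defenders: in password spraying, one common password is tried against many accounts (evading per-account lockouts), whereas brute force hammers a single account. A minimal detection sketch, assuming a hypothetical simplified auth-log format of `(source_ip, username, success)` tuples and an illustrative threshold:

```python
from collections import defaultdict

def flag_spraying(events, min_accounts=5):
    """Flag source IPs whose failed logins span many distinct accounts.

    Password spraying shows up as ONE source failing against MANY
    usernames, unlike brute force (many failures, one username).
    `min_accounts` is an illustrative threshold, not a standard value.
    """
    failures = defaultdict(set)  # source_ip -> set of usernames with failures
    for ip, user, success in events:
        if not success:
            failures[ip].add(user)
    return {ip for ip, users in failures.items() if len(users) >= min_accounts}

# One source probing eight accounts (spray pattern) vs. repeated
# failures against a single account (brute-force pattern).
events = [("10.0.0.9", f"user{i}", False) for i in range(8)]
events += [("192.168.1.5", "alice", False), ("192.168.1.5", "alice", False)]
print(flag_spraying(events))  # → {'10.0.0.9'}
```

Only the spraying source is flagged; the single-account brute-force source would be caught instead by ordinary per-account lockout or failure-rate rules.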