Growing underground market for rogue AI sparks cyber-security concerns

  • 📰 The Straits Times


Despite fears, the same technology is being used by the cyber-security sector to combat scams.

SINGAPORE - As students and workers harness generative artificial intelligence tools for their studies and work, so, too, have crooks.

FraudGPT is marketed on the Dark Web as a tool to learn how to hack and to write malware and malicious code. WormGPT, a similar tool, was marketed for crafting convincing phishing and business e-mail compromise messages.

Roughly 13 per cent of phishing scams in 2023 analysed by CSA showed signs that they were likely made with AI. CSA wrote: “Threat actors can use AI to scrape social media profiles and public websites for personally identifiable information, thereby increasing the speed and scale of highly personalised social engineering attacks.”

Another way the technology is deployed maliciously is in the generation of deepfake images to bypass biometric authentication. For example, to beat the use of facial recognition as a security feature, fraudsters turn to face-swopping apps.

Deepfake scams have come a long way since one of the earliest incidents surfaced in 2019, when The Wall Street Journal reported that the chief executive of a Britain-based energy firm was duped into transferring funds after fraudsters used AI-generated audio to mimic the voice of his boss.

Deepfakes have only grown more convincing since then, said CSA. It cited a case in 2024 where an employee from a multinational firm was tricked into transferring money after a video call featuring deepfaked versions of his colleagues.

With AI now driving cyber attacks, the security sector is fighting fire with fire.

“Through machine learning and algorithms, AI can be trained to detect deepfakes, phishing e-mails and suspicious activities,” said CSA.
