Malware risk inaccurately classified by AI, LLMs

Artificial intelligence and large language models continue to fall short in analyzing malware, with LLMs accurately classifying malware risk in only about 5% of cases, SiliconAngle reports. No calls to sensitive APIs were found in the code bases of 45% of applications, but that figure drops to just 5% once dependencies are considered, suggesting that the lack of dependency-level API analysis has led to an underestimation of security risk, according to a report from Endor Labs' Station9 research team. Moreover, open-source components accounted for 71% of Java application code, even though such apps use only 12% of the code they import. "The fact that there's been such a rapid expansion of new technologies related to artificial intelligence and that these capabilities are being integrated into so many other applications is truly remarkable, but it's equally important to monitor the risks they bring with them. These advances can cause considerable harm if the packages selected introduce malware and other risks to the software supply chain," said Endor Labs Station9 Lead Security Researcher Henrik Plate.
