The rise of generative AI has triggered a mad dash among companies looking to leverage the capabilities of the hottest technology of our time. Although it's reasonable to pursue the speed and efficiency gains generative AI offers, it is unwise to embrace the technology fully without considering the potential risks.
Flaws introduced by AI-generated code can spread quickly. More than 70% of software applications scanned in the past 12 months contained security flaws, according to Veracode's research. Machine learning models trained without supervision on a typical codebase will learn insecure coding practices and reproduce those flaws in the code they generate.
Training data can be leveraged while preserving code privacy, either by suppressing, masking, or anonymizing sensitive data or via cryptographic techniques. It's important not to train AI models on untested code found in the wild. Look for ways to encrypt customer data in transit and at rest, and don't use or retain customer data when training a model.
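As a minimal sketch of the masking approach, the example below uses regular expressions to redact common secret patterns from code samples before they enter a training corpus. The patterns and placeholder tokens here are illustrative assumptions; a production pipeline would rely on a vetted secret-detection library and entropy-based checks rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real pipelines use dedicated secret scanners.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<AWS_ACCESS_KEY>"),           # AWS access key IDs
    (re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)['\"][^'\"]+['\"]"), r"\1'<REDACTED>'"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),             # email addresses
]

def mask_secrets(source: str) -> str:
    """Replace likely secrets in a code sample with placeholder tokens."""
    for pattern, replacement in SECRET_PATTERNS:
        source = pattern.sub(replacement, source)
    return source

if __name__ == "__main__":
    sample = 'api_key = "sk-live-1234"  # contact dev@example.com'
    print(mask_secrets(sample))
    # Prints: api_key = '<REDACTED>'  # contact <EMAIL>
```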
Securing software has traditionally focused on testing and then manually fixing flaws. With AI, developers can improve secure coding efficiency by shifting security left and adopting static analysis to identify vulnerabilities as code is written. Static analysis tools, for example, can analyze source code for vulnerabilities and risks, enabling teams to address them early and avoid time-consuming and costly problems later.
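As a simplified sketch of what such a check looks like (commercial tools are far more thorough), the example below walks a Python file's abstract syntax tree and flags direct calls to eval and exec, a classic injection risk. The set of risky names is an assumption chosen for illustration.

```python
import ast
import sys

# Names whose direct invocation is a common injection risk (illustrative set).
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(path: str) -> list[tuple[int, str]]:
    """Return (line, name) pairs for risky calls found in a Python file."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    for lineno, name in find_risky_calls(sys.argv[1]):
        print(f"{sys.argv[1]}:{lineno}: call to {name}() -- review for injection risk")
```

Wired into a pre-commit hook or an editor plugin, a check like this surfaces the issue while the developer is still in the file, which is the whole point of shifting security left.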