5 GenAI coding assistant risks: Careful what you prompt for
Forsyth Alexander August 02, 2024 • 7 min read
Coding assistants powered by generative artificial intelligence (GenAI), such as GitHub Copilot and ChatGPT, are popular with developers. According to a survey by Sourcegraph, 95% of developers use these assistants to write code, mostly at work. This widespread adoption is not surprising, given that GenAI assistants help developers deliver code faster, potentially increasing output and allowing teams to meet tight deadlines more easily.
Using these assistants has risks, however, and in this blog, I cover the 5 biggest. The good news is that you can avoid these risks when you foster a culture of responsible GenAI usage supported by best practices and a platform designed to provide governance and guardrails. This blog also explains how to build that culture and how low-code can help.
The risks of using a GenAI-powered coding assistant without guardrails
When you use a generative AI coding assistant without guardrails, there are potential pitfalls that can affect the reliability, security, and ethics of your apps. Here are the 5 biggest risks.
Risk #1. Insecure code and other threats
One of the most significant concerns about the code produced from ChatGPT prompts (and other assistants) is its effect on application security and data privacy—and how it opens the door to new, sophisticated cyberthreats.
Insecure code
Several studies paint a concerning picture of the security risks associated with AI coding assistants. University of Quebec researchers asked ChatGPT to generate 21 different programs and applications in a variety of programming languages. While the generated output worked as intended, only five of the programs were secure. Yet a Sauce Labs survey reveals that 61% of developers admit to using untested code generated by ChatGPT, with 28% doing so regularly. That means generated code can easily replicate security issues from user codebases and open-source projects, and no one is the wiser until it's too late.
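To make the risk concrete, here is a small, hypothetical illustration (not taken from any of the studies above) of the kind of injectable database lookup an assistant can produce from a naive prompt, next to the parameterized version a human review should insist on:

```python
import sqlite3

# Hypothetical example of assistant-style output: building SQL by string
# interpolation. A username such as "' OR '1'='1" turns the WHERE clause
# into a tautology and returns every row.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, name FROM users WHERE name = '{username}'"  # injectable
    return conn.execute(query).fetchall()

# The safe version: a parameterized query lets the driver handle escaping.
def find_user_secure(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
    payload = "' OR '1'='1"
    print(find_user_insecure(conn, payload))  # leaks both rows
    print(find_user_secure(conn, payload))    # returns []
```

Both functions pass a casual "does it work?" check with ordinary input, which is exactly why untested, unreviewed generated code is so dangerous.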
Data privacy
GenAI models are trained on massive datasets, many of which contain sensitive or personal information. This can lead to privacy breaches and legal liabilities. For example, a GenAI model used to build an application for a financial institution could be trained on customer data that includes names, account numbers, and transaction histories. An assistant drawing on that model could produce code or output that exposes sensitive information or unintentionally reveals patterns that identify individual customers and their financial behavior.
Cyberthreats
The models used by GenAI assistants are also vulnerable to black hats. For example, hackers have been able to manipulate input data during model training to make models generate misleading or harmful outputs. Another attack technique exploits hallucination: ChatGPT sometimes suggests questionable code snippets as fixes for common vulnerabilities and exposures, and even offers links to coding libraries that don't exist. Hackers then hijack these nonexistent library names by publishing a malicious package in their place.
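One lightweight guardrail against that last technique is to confirm that any dependency an assistant recommends actually exists in the official registry before installing it. Below is a minimal sketch using PyPI's public JSON API; the package name fastjsonlib is invented here for illustration:

```python
import json
import urllib.error
import urllib.request

# Minimal sketch: check whether an AI-suggested dependency is a real,
# published PyPI project before running `pip install` on it.
def package_exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # valid metadata means the project exists
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown project: possibly hallucinated
            return False
        raise

if __name__ == "__main__":
    for pkg in ["requests", "fastjsonlib"]:  # "fastjsonlib" is made up
        verdict = "exists" if package_exists_on_pypi(pkg) else "NOT FOUND, do not install"
        print(f"{pkg}: {verdict}")
```

Existence alone proves little, of course: a once-hallucinated name may already have been hijacked, so a package's age, maintainers, and download history deserve a look before you trust it.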
Risk #2. Complexity on both sides of the code
The nuanced nature of software development, with its need for creativity, contextual understanding, and logical reasoning, remains a significant challenge for AI.
A lack of understanding
While GenAI assistants easily handle simple, repetitive tasks, software development often involves complex problem-solving, and that is where they struggle: understanding complex codebases, recognizing intricate patterns, and detecting sophisticated issues and vulnerabilities. As a result, the code they generate is not likely to scale or serve enterprises well.
A black box
The code produced by GenAI assistants is a black box: you don't know where the code is coming from or what's in it. The assistant also doesn't know your infrastructure, your systems of record, your databases, or how everything is integrated, so its code won't reflect the modifications or extensions relevant to your landscape. It can point you in a direction, yes, but you still need to supply the specific context of your infrastructure.
Risk #3. The overproduction problem
The models used for GenAI are huge collections of data designed for predictive analytics. A GenAI copilot therefore predicts code based on its inputs, even if that code is twice as long as needed. The result? An overproduction of code, which raises several concerns. For one, copilots and other GenAI coding assistants can make up variables, method names, and function calls, and hallucinate fields that don't exist.
Plus, developers who use copilots report that the code they deliver can be up to 50% longer than it would be if written by hand. Unnecessary code leads to less maintainable, less efficient codebases, driving up technical debt.
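As a toy illustration (invented for this post, not taken from any particular assistant), here is the verbose shape such generated code often takes, next to a hand-written equivalent:

```python
# Verbose, assistant-style version: extra variables and branches that add
# length without adding behavior.
def even_squares_verbose(numbers):
    result = []
    for number in numbers:
        is_even = (number % 2 == 0)
        if is_even:
            square = number * number
            result.append(square)
    return result

# The same behavior, written by hand in one line.
def even_squares(numbers):
    return [n * n for n in numbers if n % 2 == 0]

assert even_squares_verbose(range(10)) == even_squares(range(10))
```

Every extra line is a line someone has to read, test, and maintain, which is how overproduction quietly compounds into technical debt.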
Risk #4. Knowing right (and rights) from wrong
ChatGPT and other prompt-based GenAI options can generate content that closely resembles existing work, including code. This raises intellectual property and copyright infringement concerns: if your application generates content that is too similar to another company's software or copyrighted material, legal challenges and financial penalties could be on your horizon.
Then there are the ethical risks. If an AI model is trained on data containing biases, such as gender, racial, or cultural stereotypes, it may generate content that perpetuates them. This can lead to discriminatory or offensive output that harms your users and damages your brand. One example is a tech giant that built in-house AI software for screening job applicants. The software discarded any engineering applicant who attended an all-women's university because the model had been trained on the resumes of its all-male team. The giant had to scrap the screening software.
Risk #5. The inhuman factor
Despite the advances in AI, the human element remains crucial to software development. Yet when developers use tools like GitHub Copilot or ChatGPT, there's little governance, and there often isn't a human to check the logic or soundness of the code being implemented. This increases the likelihood of something going wrong, from unnecessary code dragging down performance the way a distributed denial-of-service attack does, to untested code creating a major security breach.
Software development needs humans who can keep an eye on it, and GenAI, no matter how convincingly it seems to talk to you, is not human. With the right culture, however, you can have your GenAI-produced code and back it with the human factor and other development tools, too.
How to bypass the risks of GenAI coding assistants
Fostering a culture of responsible AI use enables your developers to generate code with a virtual assistant and deploy it more widely and securely. This involves providing guidelines for appropriate AI use, educating developers on its limitations, and promoting critical thinking skills. Implementing robust review processes, prioritizing clean code practices, conducting regular security audits, and providing secure coding training for developers are also part of this culture.
You should also invest in continuous learning for your development teams, implement AI governance policies, and use hybrid approaches that combine AI capabilities with human expertise. Consider a platform that enables you to take GenAI-produced code and deploy it without fear of issues. That way, you can maintain high quality standards and a strong security posture, while avoiding potential vulnerabilities introduced by using a generative AI coding assistant or copilot.
GenAI and low-code: The best of both worlds
While ChatGPT, GitHub Copilot, and other coding assistants can significantly enhance productivity and innovation, they should be treated as tools that augment human expertise rather than replace it. The key to integrating AI into software development successfully, and avoiding its risks, lies in striking the right balance between using its capabilities and maintaining human oversight.
This is where OutSystems comes in. With GenAI built into Mentor on OutSystems Developer Cloud, for example, you can benefit from the speed of generative AI while minimizing its risks. OutSystems Mentor includes an App Editor feature that uses natural language to suggest improvements to the application you created from prompts and requirements documents. Capabilities for validation, deployment, and monitoring ensure that applications remain functional and can be easily updated.
The OutSystems low-code platform automatically checks AI-generated code for vulnerabilities and ensures it meets required standards. It also helps control app creation by non-developers, preventing unauthorized or duplicate applications. Learn more by visiting our generative AI web page.
Taking the leap forward with eyes wide open
Assistants that use GenAI to produce code represent a significant leap forward in software development, offering the potential for increased productivity and innovation. However, they also introduce new risks and challenges. Fortunately, with the right strategy and the OutSystems platform, you can get all the benefits of generative AI while maintaining the security and quality of your software.
Ready to use generative AI to produce code and integrate it in your business? Watch our on-demand webinar, Building generative AI solutions: A deep dive into technical practices.
Forsyth Alexander
Since she first used a green screen centuries ago, Forsyth has been fascinated by computers, IT, programming, and developers. In her current role in product marketing, she gets to spread the word about the amazing, cutting-edge teams and innovations behind the OutSystems platform.