Quote from sachinm on 17 January 2024, 2:30 pm

Risks to consider when using LLMs
There is a lot of initial hype around using LLMs for software development and for infrastructure risk management, some of it more realistic than the rest. The conversation has shifted from expecting LLMs to replace software developers (i.e., artificial intelligence) to considering LLMs as partners and focusing on where best to apply them (i.e., augmented intelligence). Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customise the outputs of, and interactions with, an LLM. Prompt patterns are like software patterns but focus on capturing reusable solutions to problems faced when interacting with LLMs.
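To make the idea of a reusable prompt pattern concrete, here is a minimal sketch (not from the article; the function name, the "persona" pattern wording, and the example values are all illustrative assumptions):

```python
def persona_pattern(persona: str, task: str, constraints: list[str]) -> str:
    """Build a reusable 'persona' prompt: the LLM is first told to act
    as a specific role, then given the task and the output rules."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Act as {persona}.\n"
        f"Task: {task}\n"
        f"Follow these rules:\n{rules}"
    )

# Hypothetical usage: a code-review persona with explicit output rules.
prompt = persona_pattern(
    "a senior code reviewer",
    "review the attached function for off-by-one errors",
    ["cite the exact line for each finding",
     "suggest a fix for each finding"],
)
```

The point of capturing the pattern as a function is that the same structure (role, task, rules) can be reused across many interactions, rather than re-written ad hoc each time.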
As examples of such prompts, we can suggest potential use cases where LLMs can deliver productivity gains for software engineering tasks, with manageable risks:
- fostering agile project management—running sprints to deliver a Minimum Viable Product between traditional Waterfall stage gates.
- analyse software lifecycle data—Software engineers must review and analyse many types of data in large project repositories, including requirements documents, software architecture and design documents, test plans and data, compliance documents, defect lists, and so on, and with many versions over the software lifecycle.
- analyse code—Software engineers using LLMs and prompt engineering patterns can interact with code in new ways to look for gaps or inconsistencies. With infrastructure-as-code (IaC) and code-as-data approaches, such as CodeQL, LLMs can help software engineers explore code in new ways that consider multiple sources (ranging from requirement specifications to documentation to code to test cases to infrastructure) and help find inconsistencies between these various sources.
- just-in-time developer feedback—giving developers syntactic corrections as they write code helps reduce time spent in code conformance checking.
- improved testing—Developers often shortcut the task of generating unit tests. The ability to easily generate meaningful test cases via AI-enabled tools can increase overall test effectiveness and coverage and consequently help improve system quality.
- software architecture development and analysis—Early adopters are already using design vocabulary-driven prompts to guide code generation using LLMs. Using multi-model inputs to communicate, analyse, or suggest snippets of software designs via images or diagrams with supporting text is an area of future research and can help augment the knowledge and impact of software architects.
- documentation—There are many applications of LLMs to document artifacts in the software development process, ranging from contracting language to regulatory requirements and inline comments of tricky code. When LLMs are given specific data, such as code, they can create cogent comments or documentation. The reverse is also true in that when LLMs are given multiple documents, people can query LLMs using prompt engineering to generate summaries or even answers to specific questions rapidly.
- programming language translation—Legacy software and brownfield development is the norm for many systems developed and sustained today. Organizations often explore language translation efforts when they need to modernize their systems. Portions of code can be translated to other programming languages using LLMs.
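The improved-testing use case above can be sketched as a prompt builder. This is an illustrative assumption, not the article's method: `make_test_prompt`, its wording, and the `slugify` example are hypothetical, and the actual call to an LLM is deliberately omitted.

```python
def make_test_prompt(source_code: str, framework: str = "pytest") -> str:
    """Assemble a prompt asking an LLM to generate unit tests for the
    given source code; sending it to a model is outside this sketch."""
    return (
        f"Generate {framework} unit tests for the function below. "
        "Cover typical inputs, edge cases, and invalid inputs, and "
        "assert on concrete expected values.\n\n"
        f"```python\n{source_code}\n```"
    )

# Hypothetical function under test, passed to the builder as text.
EXAMPLE = '''
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
'''

prompt = make_test_prompt(EXAMPLE)
```

Keeping the prompt construction in ordinary code makes it easy to version, review, and reuse across functions, which is part of what treating prompts as programming implies.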
It is also important to understand the risks of blindly applying LLM-generated output without taking the time and effort to verify the results:
- data quality and bias—LLMs require enormous amounts of training data to learn language patterns, and their outputs are highly dependent on the data that they are trained on. Any issues that exist in the training data, such as biases and mistakes, will be amplified by LLMs.
- privacy and security—Privacy and security are key concerns in using LLMs. For example, Samsung workers recently admitted that they unwittingly disclosed confidential data and code to ChatGPT. Applying these open models in sensitive settings not only risks yielding faulty results, but also risks unknowingly releasing confidential information and propagating it to others.
- content ownership—LLMs are trained on content developed by others, which may contain proprietary information and content creators’ intellectual property. Generating output from patterns learned on such data raises plagiarism concerns.
- carbon footprint—Vast amounts of computing power are required to train deep learning models, which is raising concerns about their carbon footprint.
- explainability and unintended consequences—Explainability of deep learning and ML models is a general concern in AI, including (but not limited to) LLMs.
Source: "Application of Large Language Models (LLMs) in Software Engineering: Overblown Hype or Disruptive Change?", Ipek Ozkaya, Anita Carleton, John E. Robert, and Douglas Schmidt (Vanderbilt University), October 2, 2023.