If you ask ChatGPT what prompt engineering is, it gives a great answer. But it assumes a prompt is like a Google search: a one-off. When we embed prompts inside apps so that they can be reused, the scope of prompting has to expand and be considered in the context of a software delivery lifecycle. So ChatGPT’s answer is not complete.
Prompt engineering is a crucial skill when working with large language models, as it determines how well the model can understand and respond to user inputs. It requires a combination of linguistic expertise, domain knowledge and an understanding of how the specific model operates. If you've written code, you know exactly what the answer is going to be: the compiler doesn't change. When I write a prompt, I'm hitting a large language model, and the result it comes back with can vary from one call to the next.
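To make that contrast concrete, here is a minimal sketch that sends the same prompt several times and counts how many distinct answers come back. The `call_model` helper is a hypothetical placeholder, with a random choice standing in for real sampling, not any provider's actual API.

```python
import random

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; a random choice simulates sampling.
    Replace this with your provider's client in practice."""
    return random.choice([
        "Refunds are issued within 14 days of purchase.",
        "You can request a refund within two weeks of buying.",
        "Purchases qualify for a full refund for 14 days.",
    ])

prompt = "Summarise our refund policy in one sentence."

# A compiler produces the same output for the same input every time.
# An LLM sampled at non-zero temperature generally does not:
answers = {call_model(prompt) for _ in range(5)}
print(f"{len(answers)} distinct answers from 5 identical calls")
```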
You then build the prompt. This is quite iterative: you write, you test. Simply reversing the position of two words in the prompt gives dramatically different results. We've discovered that the longer and more detailed the prompt, the better the answer, so it is better, and quicker in the long run, to write a far tighter, more specific prompt up front than to keep iterating on a vague one. There's a whole skill set to writing prompts that get great results.
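One way to make that write-and-test loop repeatable is to keep prompt variants and cheap output checks side by side and run them together. The sketch below is illustrative: `call_model`, the variant names and the checks are all hypothetical, not a specific tool.

```python
# A minimal sketch of an iterative prompt-testing loop, assuming a
# hypothetical call_model(prompt) wrapper around your LLM client.
from typing import Callable

def call_model(prompt: str) -> str:
    """Stand-in for the real LLM call; replace with your provider's client."""
    # Canned response so the harness runs end to end without network access.
    return "Summary: login issue.\nFollow-up actions: 1. investigate 2. reply to customer"

# Two variants that differ only in the order of two instructions.
VARIANTS = {
    "summary-first": "Summarise the ticket below, then list follow-up actions.\n\n{ticket}",
    "actions-first": "List follow-up actions, then summarise the ticket below.\n\n{ticket}",
}

# Each test case pairs an input with a cheap, deterministic check on the output.
TEST_CASES: list[tuple[str, Callable[[str], bool]]] = [
    ("Customer cannot log in since Tuesday's release.", lambda out: "follow-up" in out.lower()),
    ("Invoice PDF renders blank for EU customers.", lambda out: len(out.split()) < 200),
]

for name, template in VARIANTS.items():
    passed = sum(
        check(call_model(template.format(ticket=ticket)))
        for ticket, check in TEST_CASES
    )
    print(f"{name}: {passed}/{len(TEST_CASES)} checks passed")
```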
Then we need to think about how we manage versions of prompts. We need a repository of prompts that are shareable assets, rather like we manage email templates. We need to be able to share them, version them and iterate on them to make them better and better. Our Product Management team has built a series of prompt templates that automatically build our internal website covering all the new product features that are coming out.
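Once prompts are treated as assets, they can live in a registry with names and versions, just like any other shared template. Below is a minimal sketch of such a registry; the class, field names and the "feature-announcement" template are illustrative assumptions, not an existing internal tool.

```python
# A minimal sketch of prompts as versioned, shareable assets kept in-repo.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str                 # bump on every change, like any other asset
    template: str                # prompt body with named placeholders
    owner: str = "product-management"
    tags: tuple[str, ...] = field(default_factory=tuple)

    def render(self, **values: str) -> str:
        return self.template.format(**values)

REGISTRY = {
    ("feature-announcement", "1.2.0"): PromptTemplate(
        name="feature-announcement",
        version="1.2.0",
        template=(
            "Write an internal announcement for the feature '{feature}'.\n"
            "Audience: {audience}. Keep it under 150 words."
        ),
        tags=("internal-site", "product"),
    ),
}

# Pin a specific prompt version when rendering, so apps don't drift silently.
prompt = REGISTRY[("feature-announcement", "1.2.0")].render(
    feature="usage-based billing", audience="sales engineers"
)
print(prompt)
```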