Tips for writing efficient prompts
GenAI Toolkit is currently in private beta according to the Algolia Terms of Service (“Beta Services”).
This page lists recommendations and examples for writing better prompts to get the most out of the GenAI Toolkit.
General tips for writing solid prompts
Prompt engineering is iterative. Write a first version, test it on a few examples, identify any edge cases, and iterate on the prompt to cover them. For testing, you can use the Response Playground.
Start by defining the task you want the model to perform, for example:
- Summarize this user review
- Describe who this product is for
- Translate this product’s description into Spanish
You can then develop this into a full prompt. Be as precise as possible: if you can provide detailed and explicit instructions of the task to accomplish, you will get the best results.
When possible, provide examples of what you expect, or even of what a good or bad result looks like. This technique, known as few-shot prompting, improves almost all generative AI use cases.
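As an illustration, a few-shot prompt can be assembled by prepending worked examples to the new input. The sketch below is plain Python string construction; the function name and the example reviews are hypothetical, not part of the GenAI Toolkit.

```python
# A minimal few-shot prompting sketch: two worked examples precede the
# new input, showing the model the expected style and length.
# The wording and examples here are illustrative assumptions.

def build_few_shot_prompt(review: str) -> str:
    """Assemble a summarization prompt with two worked examples."""
    return (
        "Summarize this user review in one sentence.\n\n"
        "Review: The battery died after two days and support never replied.\n"
        "Summary: Poor battery life and unresponsive customer support.\n\n"
        "Review: Fits perfectly and the fabric feels premium.\n"
        "Summary: Great fit and high-quality material.\n\n"
        f"Review: {review}\n"
        "Summary:"
    )

prompt = build_few_shot_prompt("Shipping was fast but the box arrived dented.")
```

Ending the prompt with “Summary:” nudges the model to continue the established pattern rather than restate the instructions.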
Good prompts are simple
Simplicity keeps your prompts logical and easy to maintain, and lets the model focus on the task without confusion or misinterpretation. Simple prompts are also more generic, which makes them easier for your team to maintain as your data and goals evolve.
If your prompt is getting too complicated, try to:
- Use clear language: avoid jargon unless it’s necessary for the task at hand. Use straightforward, everyday language that the model can easily understand.
- Break complicated tasks into subtasks. For example, instead of “summarize the political strategy in this presidential discourse”, you could create the following subtasks:
- Create a first prompt: “extract a bullet-point list of the key policies in this discourse”
- Followed up with: “summarize the political strategy from these bullet-point policies”
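The two subtasks above can be chained so that the second prompt consumes the first prompt’s output. In this sketch, `call_llm` is a hypothetical stand-in for whatever completion call you use; only the chaining pattern is the point.

```python
# Sketch of splitting one complicated task into two chained prompts.
# `call_llm` is a placeholder, not a real GenAI Toolkit function.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would send the prompt to your model.
    return "- Expand healthcare coverage\n- Cut payroll taxes"

def summarize_strategy(discourse: str) -> str:
    # Step 1: extract the key policies as a simple bullet-point list.
    policies = call_llm(
        "Extract a bullet-point list of the key policies in this discourse:\n\n"
        + discourse
    )
    # Step 2: summarize the strategy from the smaller, structured output.
    return call_llm(
        "Summarize the political strategy from these bullet-point policies:\n\n"
        + policies
    )
```

Each step stays simple on its own, and the intermediate bullet list can be inspected when debugging the pipeline.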
Good prompts are short
Short prompts make better generative AI experiences because they are clear and focused.
They direct the LLM’s processing efficiently, avoiding unnecessary details that could complicate understanding or reduce accuracy.
This allows for faster, more precise responses that are directly tailored to the task at hand.
If your prompt is getting too long, try to:
- Focus on essential elements: remove unnecessary details, as they might reduce the quality of answers.
- Use examples or templates: if similar prompts are available, use them as guides to structure your prompt effectively. This ensures consistency and accuracy across different scenarios.
- Test with shorter versions: experiment with condensing parts of the prompt during testing before committing to a more detailed version. This helps identify unnecessary parts of the prompt.
- Consider AI rephrasing: using an LLM to rephrase a prompt can sometimes make it more generic and compact.
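A rephrasing request is itself just another prompt. The wording below is one possible phrasing, not a toolkit feature:

```python
# Sketch of a meta-prompt asking a model to compress another prompt.
# The phrasing is an illustrative assumption.

def build_rephrase_prompt(original_prompt: str) -> str:
    """Ask a model to shorten a prompt while keeping its instructions."""
    return (
        "Rewrite the following prompt so it is shorter and more generic, "
        "while preserving all of its instructions:\n\n"
        + original_prompt
    )
```

Review the rephrased version manually before adopting it: automated rewrites can drop constraints you care about.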
Good prompts are specific
Specificity makes your prompts more efficient at doing one thing and doing it well. It ensures clarity and reduces ambiguity. Specific prompts make LLMs perform more accurately and efficiently for tasks like those in RAG APIs.
If your prompt is getting too ambiguous, try to:
- Narrow the scope: clearly define what you want the model to focus on. Instead of asking a broad question like “Tell me about the company,” specify which aspect of the company you’re interested in. For example, “Provide a summary of the company’s financial performance”.
- Provide context or constraints: offer additional context to guide the model’s response. For example, instead of asking “What are the benefits of this exercise plan?” you could say, “Explain the benefits of this exercise plan for stress management in people over 50.”
- Use explicit instructions: directly tell the model what you need. For example, “Summarize the following article in three bullet points,” or “Give me a list of five specific benefits of meditation for stress reduction.”
Concrete tips to improve your prompts
Be explicit about what you want
- Don’t: keep the expectations implicit in your prompt. Don’t write “Describe who this product is for.”
- Do: state your expectations. Write: “Describe what kind of audience this product is for. Is it appropriate for new visitors, for power users, or for our longstanding members?”
Be specific about the expected response content
- Don’t: keep the options implicit. Don’t write “Analyze the sentiment in these user reviews.”
- Do: explain what output you accept. Write: “Analyze the sentiment in these user reviews. Return only a single label: either ‘positive’, ‘negative’, or ‘neutral’.”
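Constraining the output to a fixed label set also makes the reply easy to validate in code. A minimal sketch, assuming the three labels from the prompt and a “neutral” fallback, which is an application choice rather than toolkit behavior:

```python
# Validate a label-constrained model reply; fall back to "neutral"
# when the reply doesn't match. The fallback choice is an assumption.

ALLOWED_LABELS = {"positive", "negative", "neutral"}

def parse_sentiment(reply: str) -> str:
    """Normalize the model's reply and check it against allowed labels."""
    label = reply.strip().strip("'\"").lower()
    return label if label in ALLOWED_LABELS else "neutral"

print(parse_sentiment(" Positive "))  # → positive
```

Normalizing case and stray quotes before comparing makes the check robust to small formatting variations in the model’s reply.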
Be specific about the expected response structure
- Don’t: keep the structure you expect implicit. Don’t write “Generate 5 questions about this product.”
- Do: describe the structure you need. Write: “Generate 5 questions about this product. Return a list of questions in XML, for example: <questions><question>What sizes are available?</question><question>Is this jacket suitable for cold weather?</question></questions>”
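Requesting XML makes the reply machine-readable. A sketch of parsing such a reply with Python’s standard library; the `<questions>`/`<question>` tag names are an assumption matching the example prompt:

```python
# Parse an XML-structured model reply with the standard library.
# Tag names are illustrative; use whatever your prompt specifies.
import xml.etree.ElementTree as ET

reply = (
    "<questions>"
    "<question>What sizes are available?</question>"
    "<question>Is this jacket suitable for cold weather?</question>"
    "</questions>"
)

questions = [q.text for q in ET.fromstring(reply).findall("question")]
print(len(questions))  # → 2
```

In practice, wrap the parse in a try/except, since a model may occasionally return malformed XML.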
Give the model an escape hatch
If you include an escape option in your prompt, the LLM is more likely to use it instead of hallucinating an answer.
- Don’t: request a reply at any cost (unless that’s what your UX needs). Don’t write: “Answer the user question as best as you can from the product data or your internal knowledge.”
- Do: offer an alternative (which could be to contact a human). Write: “Answer the user question as best as you can from the product data. If the context doesn’t allow you to answer with certainty, answer: ‘I’m not sure I have the answer to this - it would be best to contact support@acme.com’.”
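On the application side, a fixed fallback phrase can be detected so the UI routes the user to support instead of displaying a non-answer. A sketch, assuming the sentinel sentence from the prompt above; the routing labels are hypothetical:

```python
# Detect the escape-hatch reply defined in the prompt and route
# accordingly. Sentinel phrase and route names are assumptions.

FALLBACK = "i'm not sure i have the answer to this"

def route_reply(reply: str) -> str:
    """Return a routing decision based on the model's reply."""
    if FALLBACK in reply.lower():
        return "escalate_to_support"
    return "show_answer"
```

Matching a fixed phrase is simple but brittle; keeping the sentinel short and distinctive in the prompt makes detection more reliable.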
Further reading
For more prompting guides, see these references:
- PromptingGuide.AI by DAIR.AI
- Prompt engineering overview by Anthropic
- Prompt engineering by OpenAI