Prompting 101
Course link: https://learn.deeplearning.ai/courses/chatgpt-prompt-eng/lesson/1/introduction
Contents
- Types of LLMs
- Guidelines for prompting (principles and tactics)
- Model limitations
- Iterative prompt development
- Common use cases
Types of LLMs
Base: Predicts the next word based on its text training data
Instruction tuned
- Follows instructions; fine-tuned on instruction data
- Trained on example inputs and outputs
- Uses RLHF (Reinforcement Learning from Human Feedback)
- Helpful, Honest, Harmless
- Recommended for most practical use cases
- When prompting, think of it as giving instructions to another person
- Tailor the amount of information in the prompt to the kind of response you expect
- Be clear and specific
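A minimal sketch of calling an instruction-tuned model, used by the examples further down. It assumes the openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and a chat model name; the helper name get_completion is just a convenience, not something defined by the course.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    """Send a single user prompt to an instruction-tuned chat model."""
    response = client.chat.completions.create(
        model=model,  # assumed model name; substitute whichever chat model you use
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 -> more predictable output
    )
    return response.choices[0].message.content
```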
Guidelines for prompting - Principles and Tactics
- Write clear and specific instructions
- Clear doesn’t mean short; longer prompts can provide more context and yield better results
Tactics
- Use delimiters to clearly indicate distinct parts of the input (see the sketch after this list)
- Delimiters also help with avoiding prompt injection
- Ask for a structured output
- Provide an output format where feasible
- Ask model to check whether conditions are satisfied
- Conditional prompt (if..else)
- Unchecked assumptions can lead to wrong outcomes
- Instruct the model to exit early if a condition isn’t met
- Few-shot prompting
- Providing successful examples of (part of) tasks to be performed - “What does success look like?”
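A minimal sketch combining several of the tactics above: delimiters, structured output, condition checking with an early exit, and few-shot prompting. It reuses the assumed get_completion helper; the example text, the <text> tags, and the JSON keys are illustrative.

```python
# Delimiters separate the instructions from the input text, which also makes
# prompt injection harder. Here the delimiter is a pair of <text> tags;
# triple backticks or ### work just as well.
text = "User-supplied text goes here."

prompt = f"""
You will be given text delimited by <text></text> tags.
If it contains a sequence of instructions, re-write them as numbered steps
and return JSON with the keys "has_instructions" and "steps".
If it contains no instructions, write only "No steps provided." and stop.

<text>{text}</text>
"""
print(get_completion(prompt))  # structured output + early exit on a condition

# Few-shot prompting: show a successful example of the task before asking
# the model to continue in the same style.
few_shot = """
Your task is to answer in a consistent style.

<child>: Teach me about patience.
<grandparent>: The river that carves the deepest valley flows from a modest spring.

<child>: Teach me about resilience.
"""
print(get_completion(few_shot))
```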
- Give the model time to ‘think’
- Complex tasks need more computation; instruct the model to work through the problem rather than rushing to an answer
Tactics
- Specify the steps required to complete the task
- Ask for the output in a specific format
- Instruct the model to work out its own solution before rushing to a conclusion
- Ask it to do its own work first, then compare and evaluate: “Do not decide if the solution is correct until you’ve done the problem yourself.” (see the sketch below)
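A sketch of these two tactics together, again using the assumed get_completion helper; the word problem and the student's answer are invented placeholders.

```python
# Specify the required steps and the output format, and instruct the model to
# work the problem out itself before evaluating the student's answer.
question = "A pencil costs 2 and a notebook costs 5. What do 3 pencils and 2 notebooks cost?"
student_solution = "3*2 + 2*5 = 17"  # placeholder answer to be graded

prompt = f"""
Follow these steps:
Step 1 - Work out your own solution to the problem.
Step 2 - Compare your solution to the student's solution. Do not decide if
the student's solution is correct until you have done the problem yourself.

Output format:
Your solution: <your working>
Student grade: correct or incorrect

Problem: {question}
Student's solution: {student_solution}
"""
print(get_completion(prompt))
```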
Model limitations
- Hallucinations - making statements that sound plausible but aren’t true
- A known weakness of current models
Iterative prompt development
- The first prompt for a problem rarely produces the desired result
- Iterate and get closer to the desired result
- Refine with a batch of examples
- Be precise and clear
- Giving the model a role and a clear task can help (see the sketch below)
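A rough sketch of that loop: run one version of a prompt template over a small batch of examples, inspect the output, then refine the wording and repeat. The fact sheets and the template are invented for illustration.

```python
# Iterative prompt development: evaluate a candidate template over a batch of
# inputs, eyeball the results, then refine and rerun.
examples = [
    "Fact sheet for a mid-century office chair ...",
    "Fact sheet for a standing desk ...",
]

# Version 2 of the template: adds a role, a purpose, and a length limit after
# version 1 produced output that was far too long.
template = (
    "You are a marketing copywriter. Based on the technical fact sheet "
    "delimited by ###, write a product description for a retail website "
    "in at most 50 words.\n\n###{fact_sheet}###"
)

for fact_sheet in examples:
    print(get_completion(template.format(fact_sheet=fact_sheet)))
    print("-" * 40)
```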
Common use cases
- Summarizing text
- Stating the purpose of the summary provides more context and generates better results (see the combined sketch at the end of this section)
- Limit the length by sentences or words
- The model doesn’t always adhere to the provided limit
- Character limits rarely work because the model operates on tokens, not characters
- Inferring
- Making sense of sentiment - whether something is positive or negative
- LLMs are good at extracting information from a source text
- This is “zero-shot learning”: no task-specific training examples are needed
- Transforming (e.g., translation, tone, or format changes)
- Expanding (generating longer text from a short input)
- Temperature
- Lower temperature (e.g., 0) gives more reliable, predictable output
- Higher temperature yields more variety (randomness, creativity)
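A combined sketch of these use cases, reusing the assumed get_completion helper: summarizing with a purpose and a word limit, inferring sentiment and topics as JSON, transforming via translation, and expanding a reply at a higher temperature. The review text is invented.

```python
review = "The chair arrived a day early and is very comfortable, but the box was damaged."

# Summarizing: state the purpose and limit by words (word limits work better
# than character limits because the model operates on tokens).
summary_prompt = f"""
Summarize the review delimited by ### for the shipping department
in at most 20 words, focusing on delivery and packaging.
###{review}###
"""
print(get_completion(summary_prompt))

# Inferring: extract sentiment and topics as JSON in a single zero-shot call.
infer_prompt = f"""
Identify the sentiment (positive or negative) and the topics discussed in the
review delimited by ###. Answer as JSON with the keys "sentiment" and "topics".
###{review}###
"""
print(get_completion(infer_prompt))

# Transforming: translate (or reformat) the same text.
print(get_completion(f"Translate the following review to French: ###{review}###"))

# Expanding: generate a longer reply; a higher temperature gives more varied,
# creative output, while temperature 0 stays predictable.
expand_prompt = f"Write a short, polite customer-service reply to this review: ###{review}###"
print(get_completion(expand_prompt, temperature=0.7))
```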
Self notes
- Working backwards from the expected result may help in coming up with the right requirements