
Prompt Engineering - Crafting High-Performance AI Instructions

Master the art and science of creating effective prompts that consistently generate high-quality, reliable responses from language models

🎯 Understanding Prompt Engineering

Prompt engineering is the practice of designing inputs to elicit the most useful and accurate responses from language models. It combines creativity with systematic testing to maximize AI performance across different tasks and domains.

🧠 The Psychology of Prompts

text
                    🧠 HOW LANGUAGE MODELS PROCESS PROMPTS 🧠
                       (Understanding the AI mindset)

    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚                     PROMPT PROCESSING FLOW                      β”‚
    β”‚                                                                 β”‚
    β”‚  Input Text β†’ Tokenization β†’ Context Building β†’ Pattern         β”‚
    β”‚               Recognition β†’ Response Generation β†’ Output        β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                          β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚                  KEY INFLUENCERS                     β”‚
    β”‚                                                      β”‚
    β”‚  🎯 Clarity:    Clear, specific instructions         β”‚
    β”‚  πŸ“ Context:    Relevant background info             β”‚
    β”‚  πŸ” Examples:   Demonstrating desired output         β”‚
    β”‚  🎭 Persona:    Role and expertise level             β”‚
    β”‚  πŸ“Š Format:     Structure and constraints            β”‚
    β”‚  πŸ”„ Iteration:  Testing and refinement               β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
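
You can inspect the first stage of this flow directly. The sketch below uses the tiktoken package (an assumption; any tokenizer library works) to show how a prompt is split into tokens before the model sees it:

python
import tiktoken  # assumed installed: pip install tiktoken

# Encode a prompt the way an OpenAI-family model would see it
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Explain neural networks to a beginner.")

print(len(tokens))                        # number of tokens the model processes
print([enc.decode([t]) for t in tokens])  # the individual token strings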

πŸ—οΈ Fundamental Prompt Components ​

πŸ“ The Anatomy of Effective Prompts ​

python
from langchain_core.prompts import PromptTemplate

# SIPNF Framework: Situation, Instruction, Persona, kNowledge, Format
effective_prompt = PromptTemplate.from_template("""
SITUATION: You are helping a {user_type} understand {topic}.

INSTRUCTION: Explain {concept} in a way that is {difficulty_level} and practical.

PERSONA: You are {role} with {experience_level} experience in {domain}.

KNOWLEDGE: Consider these key points: {key_points}

FORMAT: Structure your response as:
1. Brief overview
2. Key concepts (3-5 points)
3. Practical example
4. Common pitfalls to avoid
5. Next steps for learning

{specific_question}
""")

# Example usage
formatted_prompt = effective_prompt.format(
    user_type="software developer new to AI",
    topic="machine learning",
    concept="neural networks",
    difficulty_level="intermediate",
    role="an experienced ML engineer",
    experience_level="10+ years",
    domain="deep learning and AI systems",
    key_points="backpropagation, gradient descent, activation functions",
    specific_question="How do neural networks actually learn from data?"
)

print(formatted_prompt)
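
Because several SIPNF fields stay constant across requests, they can be pre-filled once with PromptTemplate.partial(), leaving only the per-request variables to supply at call time. A small sketch reusing the template above:

python
# Pre-fill the stable persona/domain fields once; the returned template
# only requires the remaining variables when formatted
base_prompt = effective_prompt.partial(
    role="an experienced ML engineer",
    experience_level="10+ years",
    domain="deep learning and AI systems",
)

print(base_prompt.format(
    user_type="software developer new to AI",
    topic="machine learning",
    concept="neural networks",
    difficulty_level="intermediate",
    key_points="backpropagation, gradient descent, activation functions",
    specific_question="How do neural networks actually learn from data?",
))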

🎯 The CLEAR Framework

python
class CLEARPromptBuilder:
    """
    CLEAR Framework for prompt engineering:
    C - Context (background information)
    L - Length (specify desired output length)
    E - Examples (provide sample inputs/outputs)
    A - Audience (define target audience)
    R - Role (assign AI persona/expertise)
    """
    
    def __init__(self):
        self.components = {}
    
    def context(self, background: str):
        self.components['context'] = f"Context: {background}"
        return self
    
    def length(self, length_spec: str):
        self.components['length'] = f"Length: {length_spec}"
        return self
    
    def examples(self, examples: list):
        example_text = "\n".join([f"Example {i+1}: {ex}" for i, ex in enumerate(examples)])
        self.components['examples'] = f"Examples:\n{example_text}"
        return self
    
    def audience(self, audience_desc: str):
        self.components['audience'] = f"Audience: {audience_desc}"
        return self
    
    def role(self, role_desc: str):
        self.components['role'] = f"Role: You are {role_desc}"
        return self
    
    def build(self, instruction: str):
        prompt_parts = [self.components.get('role', '')]
        prompt_parts.append(self.components.get('context', ''))
        prompt_parts.append(self.components.get('audience', ''))
        prompt_parts.append(self.components.get('length', ''))
        prompt_parts.append(self.components.get('examples', ''))
        prompt_parts.append(f"\nInstruction: {instruction}")
        
        return "\n\n".join([part for part in prompt_parts if part])

# Usage example
clear_prompt = (CLEARPromptBuilder()
    .role("an expert Python tutor with 15 years of teaching experience")
    .context("The student is learning about data structures and has basic Python knowledge")
    .audience("Beginner programmer who learns best with hands-on examples")
    .length("Provide a comprehensive explanation in 300-500 words")
    .examples([
        "Input: 'Explain lists' β†’ Output: Detailed explanation with code examples",
        "Input: 'Show dictionary usage' β†’ Output: Practical examples with explanations"
    ])
    .build("Explain Python dictionaries and when to use them instead of lists"))

print(clear_prompt)

🎨 Advanced Prompting Techniques

πŸ”— Chain-of-Thought Prompting

python
# Chain-of-thought prompting for complex reasoning
cot_template = PromptTemplate.from_template("""
You are an expert problem solver. For the following problem, think through it step by step.

Problem: {problem}

Please solve this by:
1. Breaking down the problem into smaller parts
2. Solving each part systematically
3. Explaining your reasoning at each step
4. Combining the solutions for the final answer

Let's work through this step by step:

Step 1: Understanding the problem
[Analyze what is being asked]

Step 2: Identifying key components
[List the important elements]

Step 3: Applying relevant knowledge
[Use appropriate methods/formulas]

Step 4: Calculating/reasoning
[Show your work]

Step 5: Verification
[Check if the answer makes sense]

Final Answer: [Your conclusion]
""")

# Example for a math problem
math_problem = cot_template.format(
    problem="A store sells apples for $0.50 each and oranges for $0.75 each. If John buys 12 apples and 8 oranges, and pays with a $20 bill, how much change does he get?"
)

# Example for a coding problem
coding_problem = cot_template.format(
    problem="Write a Python function that finds the second largest number in a list without using built-in sorting functions."
)
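
To run the template end to end, the chain-of-thought prompt can be piped into a chat model with LCEL. A minimal sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set (the model name is illustrative):

python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# Prompt β†’ model β†’ string output, composed with the LCEL pipe operator
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model name
cot_chain = cot_template | llm | StrOutputParser()

answer = cot_chain.invoke({
    "problem": "A store sells apples for $0.50 each and oranges for $0.75 each. "
               "If John buys 12 apples and 8 oranges, and pays with a $20 bill, "
               "how much change does he get?"
})
print(answer)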

🎭 Few-Shot Learning Patterns

python
class FewShotPromptBuilder:
    def __init__(self, task_description: str):
        self.task_description = task_description
        self.examples = []
        self.instructions = []
    
    def add_example(self, input_text: str, output_text: str, explanation: str = ""):
        example = {
            "input": input_text,
            "output": output_text,
            "explanation": explanation
        }
        self.examples.append(example)
        return self
    
    def add_instruction(self, instruction: str):
        self.instructions.append(instruction)
        return self
    
    def build(self, new_input: str):
        prompt_parts = [self.task_description]
        
        if self.instructions:
            prompt_parts.append("\nInstructions:")
            for instruction in self.instructions:
                prompt_parts.append(f"- {instruction}")
        
        prompt_parts.append("\nExamples:")
        
        for i, example in enumerate(self.examples, 1):
            prompt_parts.append(f"\nExample {i}:")
            prompt_parts.append(f"Input: {example['input']}")
            prompt_parts.append(f"Output: {example['output']}")
            if example['explanation']:
                prompt_parts.append(f"Explanation: {example['explanation']}")
        
        prompt_parts.append(f"\nNow apply the same pattern:")
        prompt_parts.append(f"Input: {new_input}")
        prompt_parts.append("Output:")
        
        return "\n".join(prompt_parts)

# Example: Sentiment analysis with explanations
sentiment_builder = FewShotPromptBuilder(
    "Analyze the sentiment of the given text and provide a brief explanation."
)

sentiment_prompt = (sentiment_builder
    .add_instruction("Classify sentiment as Positive, Negative, or Neutral")
    .add_instruction("Provide a confidence score from 0-1")
    .add_instruction("Explain key words that influenced your decision")
    .add_example(
        "I love this new restaurant! The food is amazing.",
        "Positive (confidence: 0.95)",
        "Keywords 'love' and 'amazing' strongly indicate positive sentiment"
    )
    .add_example(
        "The weather is okay today.",
        "Neutral (confidence: 0.80)",
        "Word 'okay' suggests neutral sentiment without strong positive/negative indicators"
    )
    .add_example(
        "This product is terrible and broke after one day.",
        "Negative (confidence: 0.98)",
        "Keywords 'terrible' and 'broke' clearly indicate negative experience"
    )
    .build("The movie was decent but nothing special."))

print(sentiment_prompt)
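
LangChain also ships a built-in FewShotPromptTemplate that covers the same pattern without a custom builder. A minimal sketch of the sentiment task using it:

python
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# How each individual example is rendered
example_prompt = PromptTemplate.from_template("Input: {text}\nOutput: {sentiment}")

few_shot = FewShotPromptTemplate(
    examples=[
        {"text": "I love this new restaurant! The food is amazing.",
         "sentiment": "Positive (confidence: 0.95)"},
        {"text": "The weather is okay today.",
         "sentiment": "Neutral (confidence: 0.80)"},
    ],
    example_prompt=example_prompt,
    prefix="Analyze the sentiment of the given text.",
    suffix="Input: {input}\nOutput:",
    input_variables=["input"],
)

print(few_shot.format(input="The movie was decent but nothing special."))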

πŸ”„ Self-Correction and Refinement

python
# Template for self-correcting responses
self_correction_template = PromptTemplate.from_template("""
You are an expert {domain} specialist. I need you to solve this problem with high accuracy.

Problem: {problem}

Please follow this process:

STEP 1: Initial Solution
Provide your first attempt at solving this problem.

STEP 2: Critical Review
Review your initial solution and identify potential issues:
- Are there any logical errors?
- Did you miss any important considerations?
- Are there edge cases not handled?
- Is the solution complete and clear?

STEP 3: Improved Solution
Based on your review, provide an improved solution that addresses any identified issues.

STEP 4: Final Verification
Double-check your improved solution:
- Does it fully address the original problem?
- Is it technically accurate?
- Would it work in real-world scenarios?

Please work through all four steps systematically.
""")

# Example usage
coding_refinement = self_correction_template.format(
    domain="software engineering",
    problem="Create a Python function that safely divides two numbers and handles edge cases"
)

analysis_refinement = self_correction_template.format(
    domain="data analysis",
    problem="Determine the most effective marketing channel from the given data: Email (1000 views, 50 clicks), Social Media (5000 views, 150 clicks), Search Ads (800 views, 120 clicks)"
)
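
The same idea can be split into explicit passes, feeding the model's first answer back to it for critique. A minimal two-step sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set (the model name is illustrative):

python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model name

# Pass 1: produce a draft solution
draft_chain = PromptTemplate.from_template(
    "Solve this problem:\n{problem}"
) | llm | StrOutputParser()

# Pass 2: critique the draft and return an improved version
revise_chain = PromptTemplate.from_template(
    "Problem: {problem}\n\nDraft solution:\n{draft}\n\n"
    "Review the draft for logical errors, missing edge cases, and gaps, "
    "then return an improved solution."
) | llm | StrOutputParser()

problem = "Create a Python function that safely divides two numbers and handles edge cases"
draft = draft_chain.invoke({"problem": problem})
improved = revise_chain.invoke({"problem": problem, "draft": draft})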

🎯 Domain-Specific Prompting Strategies

πŸ’» Technical Documentation Prompts

python
# Technical documentation prompt templates
tech_doc_templates = {
    "api_documentation": PromptTemplate.from_template("""
You are a senior technical writer specializing in API documentation.

Create comprehensive documentation for the following API endpoint:

Endpoint: {endpoint}
Method: {method}
Purpose: {purpose}

Please include:

## Overview
Brief description of what this endpoint does and when to use it.

## Request Format
- URL structure
- Required parameters
- Optional parameters
- Request body format (if applicable)
- Authentication requirements

## Response Format
- Success response structure
- Error response structure
- Status codes and their meanings

## Code Examples
Provide working examples in:
- cURL
- Python (requests library)
- JavaScript (fetch API)

## Error Handling
Common errors and how to handle them.

## Rate Limiting
Any rate limiting considerations.

Make the documentation clear, complete, and developer-friendly.
"""),

    "code_review": PromptTemplate.from_template("""
You are a senior software engineer conducting a thorough code review.

Code to review:
```{language}
{code}

Context:

Please provide a comprehensive review covering:

Code Quality Assessment ​

  • Readability and maintainability
  • Code organization and structure
  • Naming conventions
  • Comments and documentation

Technical Analysis ​

  • Algorithm efficiency and performance
  • Memory usage considerations
  • Error handling and edge cases
  • Security considerations

Best Practices ​

  • Language-specific best practices
  • Design patterns usage
  • Testing considerations

Specific Recommendations ​

  • Concrete improvements with code examples
  • Priority level for each suggestion
  • Reasoning behind each recommendation

Overall Rating ​

Rate the code from 1-10 and provide summary feedback.

Be constructive, specific, and provide actionable feedback. """),

"debugging_assistant": PromptTemplate.from_template("""

You are an expert debugging specialist with deep knowledge of {technology}.

Problem Description:

Code with Issue:

{code}

Error Message (if any):

Expected Behavior:

Please help debug this issue by:

Problem Analysis ​

  1. Identify the root cause of the issue
  2. Explain why this problem occurs
  3. List all contributing factors

Solution Steps ​

  1. Immediate fix for the current issue
  2. Code corrections with explanations
  3. Testing approach to verify the fix

Prevention Strategies ​

  1. How to avoid this issue in the future
  2. Best practices to implement
  3. Tools or techniques that could help

Alternative Approaches ​

If applicable, suggest different ways to implement the same functionality.

Provide working, tested code solutions. """) }

Example usage ​

python
api_docs = tech_doc_templates["api_documentation"].format(
    endpoint="/api/v1/users/{user_id}/preferences",
    method="PATCH",
    purpose="Update user notification preferences"
)
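
The other templates are formatted the same way; for example, the code review template just needs the language, the code itself, and some context:

python
review_prompt = tech_doc_templates["code_review"].format(
    language="python",
    code="def add(a, b):\n    return a + b",
    context="Utility function for an internal finance library",
)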

πŸ“Š Data Analysis Prompts

python
# Data analysis and insights prompt templates
data_analysis_templates = {
    "dataset_explorer": PromptTemplate.from_template("""
You are a senior data analyst with expertise in {domain} data analysis.

Dataset Description: {dataset_description}
Analysis Goal: {analysis_goal}
Key Metrics: {key_metrics}

Please provide a comprehensive analysis plan:

## Data Understanding
1. What does this dataset represent?
2. What are the key variables and their relationships?
3. What time period or scope does it cover?

## Exploratory Data Analysis (EDA) Plan
1. Initial data quality checks
2. Distribution analysis for key variables
3. Correlation analysis
4. Outlier detection strategy
5. Missing data assessment

## Statistical Analysis Approach
1. Appropriate statistical methods for the goals
2. Hypothesis formulation (if applicable)
3. Significance testing strategy
4. Confidence intervals and effect sizes

## Visualization Strategy
1. Key charts and graphs to create
2. Dashboard design considerations
3. Storytelling through data visualization

## Insights and Recommendations Framework
1. Business implications of findings
2. Actionable recommendations
3. Limitations and caveats
4. Next steps for further analysis

Provide specific, actionable guidance with methodology explanations.
"""),

    "insight_generator": PromptTemplate.from_template("""
You are a business intelligence expert analyzing data for strategic decision-making.

Data Summary: {data_summary}
Business Context: {business_context}
Stakeholder Questions: {stakeholder_questions}

Generate insights following this structure:

## Executive Summary
- Top 3 key findings
- Business impact summary
- Critical action items

## Detailed Findings

### Finding 1: [Title]
- **What:** Clear statement of the finding
- **Why:** Root cause analysis
- **Impact:** Business implications
- **Confidence:** How certain are you about this finding?

### Finding 2: [Title]
[Same structure as above]

### Finding 3: [Title]
[Same structure as above]

## Data Quality Assessment
- Reliability of the conclusions
- Limitations and assumptions
- Confidence intervals where applicable

## Recommendations
1. **Immediate Actions** (next 30 days)
2. **Short-term Initiatives** (1-3 months)
3. **Long-term Strategy** (3+ months)

## Next Steps
- Additional data needed
- Follow-up analysis recommendations
- Monitoring and tracking suggestions

Focus on actionable insights that drive business value.
""")
}

# Example usage
dataset_analysis = data_analysis_templates["dataset_explorer"].format(
    domain="e-commerce",
    dataset_description="Customer purchase history over 2 years with demographics",
    analysis_goal="Identify factors that drive customer retention and lifetime value",
    key_metrics="Retention rate, CLV, purchase frequency, average order value"
)

πŸŽ“ Educational Content Prompts

python
# Educational content creation templates
education_templates = {
    "lesson_planner": PromptTemplate.from_template("""
You are an experienced educator and curriculum designer specializing in {subject}.

Topic: {topic}
Student Level: {student_level}
Learning Objectives: {learning_objectives}
Time Available: {duration}

Create a comprehensive lesson plan:

## Learning Outcomes
By the end of this lesson, students will be able to:
1. [Specific, measurable outcomes]
2. [Build on previous knowledge]
3. [Apply new concepts]

## Prerequisite Knowledge
- What students should know before this lesson
- Quick assessment questions to check readiness

## Lesson Structure

### Introduction ({intro_time} minutes)
- Hook/engagement activity
- Connection to prior learning
- Overview of what will be covered

### Main Content ({main_time} minutes)
- Key concepts breakdown
- Interactive demonstrations
- Guided practice activities
- Checking for understanding

### Application ({application_time} minutes)
- Hands-on practice
- Problem-solving activities
- Peer collaboration opportunities

### Wrap-up ({wrap_time} minutes)
- Summary of key points
- Quick assessment
- Preview of next lesson

## Assessment Strategies
- Formative assessment throughout
- Summative assessment options
- Success criteria for each objective

## Differentiation
- Support for struggling learners
- Extensions for advanced students
- Multiple learning style accommodations

## Resources Needed
- Materials and equipment
- Technology requirements
- Preparation checklist

Make it engaging, practical, and pedagogically sound.
"""),

    "concept_explainer": PromptTemplate.from_template("""
You are a master teacher known for making complex concepts accessible and memorable.

Concept to Explain: {concept}
Target Audience: {audience}
Complexity Level: {complexity_level}
Learning Style: {learning_style}

Create a comprehensive explanation using the following structure:

## The Big Picture
Start with why this concept matters and how it fits into the larger context.

## Simple Definition
Explain the concept in one clear, jargon-free sentence.

## Breaking It Down
Divide the concept into 3-5 digestible components:

### Component 1: [Name]
- What it is
- Why it matters
- Simple example

### Component 2: [Name]
[Same structure]

## Real-World Connections
Provide 2-3 practical examples that the audience can relate to.

## Common Misconceptions
Address typical misunderstandings and clarify them.

## Memory Aids
- Analogies or metaphors
- Mnemonics or patterns
- Visual descriptions

## Practice Applications
Simple exercises or questions to reinforce understanding.

## Next Steps
What to learn next and how this concept builds forward.

Use language appropriate for the target audience and incorporate active learning elements.
""")
}

# Example usage
python_lesson = education_templates["lesson_planner"].format(
    subject="Python programming",
    topic="Object-Oriented Programming fundamentals",
    student_level="Intermediate beginners with basic Python knowledge",
    learning_objectives="Understand classes, objects, inheritance, and encapsulation",
    duration="90 minutes",
    intro_time="10",
    main_time="50",
    application_time="25",
    wrap_time="5"
)

πŸ”§ Prompt Optimization and Testing

πŸ“Š A/B Testing Framework for Prompts

python
import random
from typing import Dict, List, Any
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PromptTest:
    name: str
    prompt_template: PromptTemplate
    description: str
    created_at: datetime = field(default_factory=datetime.now)  # evaluated per instance, not once at class definition

@dataclass
class TestResult:
    prompt_name: str
    input_data: Dict[str, Any]
    output: str
    rating: float
    response_time: float
    timestamp: datetime = field(default_factory=datetime.now)  # evaluated per instance

class PromptABTester:
    def __init__(self):
        self.tests: Dict[str, List[PromptTest]] = {}
        self.results: List[TestResult] = []
    
    def create_test(self, test_group: str, prompt_tests: List[PromptTest]):
        """Create a new A/B test group with multiple prompt variants"""
        self.tests[test_group] = prompt_tests
    
    def run_test(self, test_group: str, input_data: Dict[str, Any], 
                 user_id: str = None) -> tuple[str, str]:
        """Select a prompt variant and return its name and the formatted prompt"""
        if test_group not in self.tests:
            raise ValueError(f"Test group '{test_group}' not found")
        
        tests = self.tests[test_group]
        
        if user_id:
            # Deterministic variant for the same user within a process
            # (str hash is salted across runs; use hashlib for cross-run stability)
            selected = tests[hash(user_id) % len(tests)]
        else:
            # Random selection
            selected = random.choice(tests)
        
        formatted_prompt = selected.prompt_template.format(**input_data)
        return selected.name, formatted_prompt
    
    def record_result(self, prompt_name: str, input_data: Dict[str, Any], 
                     output: str, rating: float, response_time: float):
        """Record test results for analysis"""
        result = TestResult(
            prompt_name=prompt_name,
            input_data=input_data,
            output=output,
            rating=rating,
            response_time=response_time
        )
        self.results.append(result)
    
    def analyze_results(self, test_group: str) -> Dict[str, Any]:
        """Analyze results for a test group"""
        group_prompts = {test.name for test in self.tests[test_group]}
        group_results = [r for r in self.results if r.prompt_name in group_prompts]
        
        analysis = {}
        for prompt_name in group_prompts:
            prompt_results = [r for r in group_results if r.prompt_name == prompt_name]
            
            if prompt_results:
                ratings = [r.rating for r in prompt_results]
                times = [r.response_time for r in prompt_results]
                
                analysis[prompt_name] = {
                    'total_tests': len(prompt_results),
                    'avg_rating': sum(ratings) / len(ratings),
                    'avg_response_time': sum(times) / len(times),
                    'min_rating': min(ratings),
                    'max_rating': max(ratings),
                    'rating_std': self._calculate_std(ratings)
                }
        
        return analysis
    
    def _calculate_std(self, values: List[float]) -> float:
        """Calculate standard deviation"""
        if len(values) < 2:
            return 0.0
        mean = sum(values) / len(values)
        variance = sum((x - mean) ** 2 for x in values) / (len(values) - 1)
        return variance ** 0.5

# Example A/B test setup
tester = PromptABTester()

# Create different prompt variants for testing
prompt_a = PromptTest(
    name="formal_tutor",
    prompt_template=PromptTemplate.from_template(
        "You are a professional programming instructor. Explain {topic} to a {level} student with clear examples and proper terminology."
    ),
    description="Formal, professional teaching style"
)

prompt_b = PromptTest(
    name="friendly_tutor", 
    prompt_template=PromptTemplate.from_template(
        "Hey there! I'm your coding buddy 😊 Let me help you understand {topic}. Since you're at {level} level, I'll make this super clear and fun!"
    ),
    description="Friendly, casual teaching style"
)

prompt_c = PromptTest(
    name="socratic_tutor",
    prompt_template=PromptTemplate.from_template(
        "I'm here to guide your learning about {topic}. As a {level} learner, let's explore this together. I'll ask questions to help you discover the concepts yourself."
    ),
    description="Socratic method teaching style"
)

# Set up the test
tester.create_test("tutoring_style", [prompt_a, prompt_b, prompt_c])

# Run tests
test_input = {"topic": "Python functions", "level": "beginner"}
selected_prompt, formatted = tester.run_test("tutoring_style", test_input, user_id="user123")

# Simulate recording results
tester.record_result(selected_prompt, test_input, "Generated response here", 4.5, 1.2)

# Analyze results
results = tester.analyze_results("tutoring_style")
print(f"Test Results: {results}")

🎯 Prompt Performance Metrics

python
class PromptMetrics:
    """Track and analyze prompt performance across multiple dimensions"""
    
    def __init__(self):
        self.metrics = {
            'accuracy': [],
            'relevance': [],
            'clarity': [],
            'completeness': [],
            'response_time': [],
            'token_usage': []
        }
    
    def record_evaluation(self, accuracy: float, relevance: float, 
                         clarity: float, completeness: float,
                         response_time: float, token_usage: int):
        """Record evaluation metrics for a prompt response"""
        self.metrics['accuracy'].append(accuracy)
        self.metrics['relevance'].append(relevance)
        self.metrics['clarity'].append(clarity)
        self.metrics['completeness'].append(completeness)
        self.metrics['response_time'].append(response_time)
        self.metrics['token_usage'].append(token_usage)
    
    def get_summary(self) -> Dict[str, Dict[str, float]]:
        """Get summary statistics for each metric"""
        summary = {}
        for metric_name, values in self.metrics.items():
            if values:
                summary[metric_name] = {
                    'mean': sum(values) / len(values),
                    'min': min(values),
                    'max': max(values),
                    'count': len(values)
                }
        return summary
    
    def get_composite_score(self, weights: Dict[str, float] = None) -> float:
        """Calculate weighted composite performance score"""
        if weights is None:
            weights = {
                'accuracy': 0.3,
                'relevance': 0.25,
                'clarity': 0.2,
                'completeness': 0.15,
                'response_time': 0.05,  # normalized below so faster responses score higher
                'token_usage': 0.05     # normalized below so leaner responses score higher
            }
        
        normalized_scores = {}
        for metric, values in self.metrics.items():
            if values and metric in weights:
                if metric in ['response_time', 'token_usage']:
                    # For metrics where lower is better, invert the scoring
                    normalized_scores[metric] = 1 - (sum(values) / len(values)) / max(values)
                else:
                    # For metrics where higher is better
                    normalized_scores[metric] = sum(values) / len(values)
        
        composite = sum(score * weights.get(metric, 0) 
                       for metric, score in normalized_scores.items())
        return max(0, min(1, composite))  # Clamp between 0 and 1

# Example usage
metrics = PromptMetrics()

# Record several evaluations
metrics.record_evaluation(
    accuracy=0.85, relevance=0.90, clarity=0.80, 
    completeness=0.75, response_time=1.5, token_usage=150
)
metrics.record_evaluation(
    accuracy=0.92, relevance=0.88, clarity=0.95, 
    completeness=0.85, response_time=1.2, token_usage=120
)

summary = metrics.get_summary()
composite_score = metrics.get_composite_score()

print(f"Performance Summary: {summary}")
print(f"Composite Score: {composite_score:.3f}")

πŸ”— Integration with Advanced LangChain Features

πŸ”„ Dynamic Prompt Selection

python
from langchain_core.runnables import RunnableLambda

class DynamicPromptSelector:
    """Select prompts based on input characteristics and context"""
    
    def __init__(self):
        self.prompt_library = {}
        self.selection_rules = {}
    
    def register_prompt(self, name: str, prompt_template: PromptTemplate, 
                       conditions: Dict[str, Any]):
        """Register a prompt with selection conditions"""
        self.prompt_library[name] = {
            'template': prompt_template,
            'conditions': conditions
        }
    
    def add_selection_rule(self, rule_name: str, condition_func):
        """Add a custom selection rule function"""
        self.selection_rules[rule_name] = condition_func
    
    def select_prompt(self, input_data: Dict[str, Any]) -> PromptTemplate:
        """Select the best prompt based on input characteristics"""
        best_match = None
        best_score = -1
        
        for name, prompt_info in self.prompt_library.items():
            score = self._calculate_match_score(input_data, prompt_info['conditions'])
            if score > best_score:
                best_score = score
                best_match = prompt_info['template']
        
        return best_match or list(self.prompt_library.values())[0]['template']
    
    def _calculate_match_score(self, input_data: Dict[str, Any], 
                              conditions: Dict[str, Any]) -> float:
        """Calculate how well input matches prompt conditions"""
        score = 0
        total_conditions = len(conditions)
        
        for condition_key, condition_value in conditions.items():
            if condition_key in input_data:
                if isinstance(condition_value, list):
                    if input_data[condition_key] in condition_value:
                        score += 1
                elif input_data[condition_key] == condition_value:
                    score += 1
        
        return score / total_conditions if total_conditions > 0 else 0

# Setup dynamic prompt selection
selector = DynamicPromptSelector()

# Register different prompts for different scenarios
selector.register_prompt(
    "beginner_coding",
    PromptTemplate.from_template("""
    You are a patient coding teacher. Explain {topic} to someone just starting out.
    Use simple language, provide step-by-step instructions, and include basic examples.
    
    Topic: {topic}
    Question: {question}
    """),
    conditions={"skill_level": "beginner", "domain": "programming"}
)

selector.register_prompt(
    "advanced_technical",
    PromptTemplate.from_template("""
    You are a senior technical expert. Provide an advanced analysis of {topic}.
    Include technical details, performance considerations, and best practices.
    
    Topic: {topic}
    Question: {question}
    """),
    conditions={"skill_level": ["advanced", "expert"], "domain": "programming"}
)

selector.register_prompt(
    "business_analysis",
    PromptTemplate.from_template("""
    You are a business analyst. Explain {topic} in terms of business value and impact.
    Focus on ROI, strategic implications, and actionable insights.
    
    Topic: {topic}
    Question: {question}
    """),
    conditions={"domain": "business", "output_type": "analysis"}
)

# Use dynamic selection in a chain
def select_and_format_prompt(input_dict):
    selected_prompt = selector.select_prompt(input_dict)
    return selected_prompt.format(**input_dict)

dynamic_chain = RunnableLambda(select_and_format_prompt)

# Example usage
result1 = dynamic_chain.invoke({
    "skill_level": "beginner",
    "domain": "programming", 
    "topic": "Python functions",
    "question": "How do I create my first function?"
})

result2 = dynamic_chain.invoke({
    "skill_level": "expert",
    "domain": "programming",
    "topic": "Python functions", 
    "question": "What are the performance implications of different function designs?"
})
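
Because the selector emits a plain prompt string, it composes directly with a chat model. A short sketch, again assuming langchain-openai is installed and an API key is set (the model name is illustrative):

python
from langchain_openai import ChatOpenAI

# The selector output (a formatted prompt string) is valid chat model input
full_chain = dynamic_chain | ChatOpenAI(model="gpt-4o-mini")  # illustrative

response = full_chain.invoke({
    "skill_level": "beginner",
    "domain": "programming",
    "topic": "Python functions",
    "question": "How do I create my first function?"
})
print(response.content)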

🎯 Best Practices and Guidelines

βœ… Prompt Engineering Principles

  1. Specificity Over Generality

    python
    # Good: Specific and actionable
    specific_prompt = """
    You are a Python code reviewer for a financial services company.
    Review this function for security vulnerabilities, performance issues, 
    and compliance with PEP 8 standards. Provide specific recommendations.
    """
    
    # Avoid: Too general
    general_prompt = "Review this code and make it better."
  2. Clear Output Format Specification

    python
    # Good: Clear format requirements
    structured_output = """
    Analyze the following data and respond in this exact format:
    
    SUMMARY: [One sentence overview]
    KEY_INSIGHTS: [Bullet points of 3-5 insights]
    CONFIDENCE_LEVEL: [High/Medium/Low]
    NEXT_ACTIONS: [Numbered list of recommendations]
    """
  3. Progressive Disclosure of Information

    python
    # Good: Builds context progressively
    progressive_prompt = """
    Context: You are analyzing customer feedback for an e-commerce platform.
    
    Background: The company has seen declining satisfaction scores over 6 months.
    
    Data: [Provide data here]
    
    Your task: Identify the top 3 factors driving dissatisfaction and suggest solutions.
    
    Requirements: Base conclusions on data evidence and consider implementation feasibility.
    """
  4. Error Prevention and Handling

    python
    robust_prompt = """
    {base_instruction}
    
    Important guidelines:
    - If the data is insufficient, clearly state what additional information is needed
    - If multiple interpretations are possible, present the most likely scenarios
    - If you're uncertain about any aspect, explicitly mention your confidence level
    - If the request is outside your expertise, recommend appropriate specialists
    
    Input: {user_input}
    """

πŸ” Testing and Validation Strategies ​

python
class PromptValidator:
    """Comprehensive prompt testing and validation framework"""
    
    def __init__(self):
        self.test_cases = []
        self.validation_rules = []
    
    def add_test_case(self, name: str, inputs: Dict[str, Any], 
                     expected_outputs: Dict[str, Any]):
        """Add a test case for prompt validation"""
        self.test_cases.append({
            'name': name,
            'inputs': inputs,
            'expected': expected_outputs
        })
    
    def add_validation_rule(self, name: str, validation_func):
        """Add a validation rule function"""
        self.validation_rules.append({
            'name': name,
            'func': validation_func
        })
    
    def validate_prompt(self, prompt_template: PromptTemplate, 
                       model_function) -> Dict[str, Any]:
        """Run comprehensive validation on a prompt"""
        results = {
            'test_results': [],
            'validation_results': [],
            'overall_score': 0
        }
        
        # Run test cases
        for test_case in self.test_cases:
            try:
                formatted_prompt = prompt_template.format(**test_case['inputs'])
                output = model_function(formatted_prompt)
                
                test_result = {
                    'name': test_case['name'],
                    'status': 'passed',
                    'output': output,
                    'expected': test_case['expected']
                }
                
                # Check if output meets expectations
                meets_expectations = self._check_expectations(
                    output, test_case['expected']
                )
                if not meets_expectations:
                    test_result['status'] = 'failed'
                
                results['test_results'].append(test_result)
                
            except Exception as e:
                results['test_results'].append({
                    'name': test_case['name'],
                    'status': 'error',
                    'error': str(e)
                })
        
        # Run validation rules
        for rule in self.validation_rules:
            try:
                rule_result = rule['func'](prompt_template, results['test_results'])
                results['validation_results'].append({
                    'rule': rule['name'],
                    'result': rule_result
                })
            except Exception as e:
                results['validation_results'].append({
                    'rule': rule['name'],
                    'error': str(e)
                })
        
        # Calculate overall score
        passed_tests = sum(1 for test in results['test_results'] 
                          if test.get('status') == 'passed')
        total_tests = len(results['test_results'])
        results['overall_score'] = passed_tests / total_tests if total_tests > 0 else 0
        
        return results
    
    def _check_expectations(self, output: str, expected: Dict[str, Any]) -> bool:
        """Check if output meets expected criteria"""
        for criterion, value in expected.items():
            if criterion == 'contains':
                if not all(phrase in output for phrase in value):
                    return False
            elif criterion == 'length_min':
                if len(output) < value:
                    return False
            elif criterion == 'length_max':
                if len(output) > value:
                    return False
            elif criterion == 'format':
                # Add format validation logic here
                pass
        
        return True

# Example validation setup
validator = PromptValidator()

# Add test cases
validator.add_test_case(
    "basic_explanation",
    inputs={"topic": "Python lists", "level": "beginner"},
    expected={
        "contains": ["list", "example", "index"],
        "length_min": 100,
        "length_max": 500
    }
)

validator.add_test_case(
    "advanced_analysis",
    inputs={"topic": "algorithm complexity", "level": "advanced"},
    expected={
        "contains": ["Big O", "time complexity", "space complexity"],
        "length_min": 200,
        "length_max": 800
    }
)

# Add validation rules
def check_clarity(prompt_template, test_results):
    """Check if responses are clear and well-structured"""
    clarity_indicators = ["first", "second", "then", "finally", "example"]
    
    clear_responses = 0
    for result in test_results:
        if result.get('status') == 'passed':
            output = result.get('output', '')
            if any(indicator in output.lower() for indicator in clarity_indicators):
                clear_responses += 1
    
    return clear_responses / len(test_results) if test_results else 0

validator.add_validation_rule("clarity_check", check_clarity)

# Mock model function for testing
def mock_model(prompt):
    return f"Generated response based on: {prompt[:50]}..."

# Run validation
validation_results = validator.validate_prompt(
    PromptTemplate.from_template("Explain {topic} to a {level} student"),
    mock_model
)

print(f"Validation Results: {validation_results}")

πŸ”— Next Steps

Ready to put your prompts into action? Continue with the next guide in this series.


Key Prompt Engineering Takeaways:

  • Systematic approach beats trial-and-error for consistent results
  • Testing and validation are essential for production prompts
  • Context and specificity dramatically improve output quality
  • Performance metrics help optimize prompts over time
  • Domain expertise in prompting becomes increasingly valuable
