Model Input - LangChain Input Processing ​

Master prompt engineering and input optimization for effective LLM communication

πŸ“₯ What is Model Input? ​

Model Input in LangChain refers to how we structure and format prompts for language models. Instead of sending raw text, LangChain provides powerful tools to create structured, reusable, and effective prompts that get better results from LLMs.

Key Focus: Transform raw user requests into well-structured prompts using LangChain's message types and templates.

🎯 LangChain Message Types ​

LangChain evolved from simple string messages to structured message objects that better represent conversation flows.

Evolution: From Strings to Structured Messages ​

Before (Raw OpenAI): Messages were dictionaries with roles

python
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a programming language"}
]

Now (LangChain): Structured message objects with clear types

python
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage(content="You are a helpful assistant"),
    HumanMessage(content="What is Python?"),
    AIMessage(content="Python is a programming language")
]

πŸ“ System Messages ​

System messages define the AI's behavior and personality.

python
%load_ext dotenv
%dotenv
from langchain_openai.chat_models import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

chat = ChatOpenAI(
    model='gpt-4', 
    seed=365, 
    temperature=0, 
    max_tokens=100
)

# Define the AI's personality
system_message = SystemMessage(content="""
You are Marv, a chatbot that reluctantly answers questions with sarcastic responses.
""")

human_message = HumanMessage(content="""
I've recently adopted a dog. Can you suggest some dog names?
""")

response = chat.invoke([system_message, human_message])
print(response.content)

πŸ‘€ Human Messages ​

Human messages represent user input in the conversation.

python
# Simple human message
human_msg = HumanMessage(content="What's the capital of France?")

# Multi-part human message
complex_human_msg = HumanMessage(content="""
I'm working on a machine learning project and need help with:
1. Data preprocessing
2. Model selection
3. Evaluation metrics

My dataset has 10,000 samples with 50 features.
""")

πŸ€– AI Messages ​

AI messages represent previous assistant responses, crucial for maintaining conversation context.

python
from langchain_core.messages import HumanMessage, AIMessage

# Build conversation history
conversation = [
    HumanMessage(content="I've recently adopted a dog. Can you suggest some dog names?"),
    AIMessage(content="""Oh, absolutely. Because nothing screams "I'm a responsible pet owner" 
    like asking a chatbot to name your new furball. How about "Bark Twain" (if it's a literary hound)?"""),
    
    HumanMessage(content="I've recently adopted a cat. Can you suggest some cat names?"),
    AIMessage(content="""Oh, absolutely. Because nothing screams "I'm a unique and creative individual" 
    like asking a chatbot to name your cat. How about "Furry McFurFace", "Sir Meowsalot", or "Catastrophe"?"""),
    
    # New question with context
    HumanMessage(content="I've recently adopted a fish. Can you suggest some fish names?")
]

response = chat.invoke(conversation)
print(response.content)

🎯 Few-Shot Prompting ​

Few-shot prompting is a technique where you provide examples of desired input-output behavior to guide the model's responses.

What is Few-Shot Prompting? ​

Few-shot prompting involves giving the model a small number of examples (usually 1–5) within the prompt itself, helping it learn the pattern for new, similar queries.

Structure of Few-Shot Prompting ​

text
Example 1:
Q: What is the capital of France?
A: Paris

Example 2:
Q: What is the capital of Japan?
A: Tokyo

Now you:
Q: What is the capital of Germany?
A: [Model completes with "Berlin"]

πŸ› οΈ LangChain Few-Shot Implementation ​

python
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate, 
    AIMessagePromptTemplate, 
    FewShotChatMessagePromptTemplate
)

# Define the templates for examples
human_template = HumanMessagePromptTemplate.from_template(
    "I've recently adopted a {pet}. Could you suggest some {pet} names?"
)

ai_template = AIMessagePromptTemplate.from_template("{response}")

# Create the example template
example_template = ChatPromptTemplate.from_messages([
    human_template, 
    ai_template
])

# Define examples
examples = [
    {
        'pet': 'dog', 
        'response': '''Oh, absolutely. Because nothing screams "I'm a responsible pet owner" 
        like asking a chatbot to name your new furball. How about "Bark Twain" (if it's a literary hound)?'''
    }, 
    {
        'pet': 'cat', 
        'response': '''Oh, absolutely. Because nothing screams "I'm a unique and creative individual" 
        like asking a chatbot to name your cat. How about "Furry McFurFace", "Sir Meowsalot", or "Catastrophe"?'''
    }, 
    {
        'pet': 'fish', 
        'response': '''Oh, absolutely. Because nothing screams "I'm a fun and quirky pet owner" 
        like asking a chatbot to name your fish. How about "Fin Diesel", "Gill Gates", or "Bubbles"?'''
    }
]

# Create few-shot prompt
few_shot_prompt = FewShotChatMessagePromptTemplate(
    examples=examples, 
    example_prompt=example_template, 
    input_variables=['pet']
)

# Combine with new question
final_template = ChatPromptTemplate.from_messages([
    few_shot_prompt, 
    human_template
])

# Use the template
chat_prompt = final_template.invoke({'pet': 'rabbit'})

# See the generated messages
for message in chat_prompt.messages:
    print(f'{message.type}: {message.content}\n')

# Get response
response = chat.invoke(chat_prompt)
print("Final Response:", response.content)

πŸ“‹ Prompt Templates ​

LangChain provides powerful templating tools to create reusable, dynamic prompts.

Basic Prompt Templates ​

python
from langchain_core.prompts import PromptTemplate

# Simple template
template = PromptTemplate.from_template("""
System:
{description}

Human:
I've recently adopted a {pet}.
Could you suggest some {pet} names?
""")

# Use the template
prompt_value = template.invoke({
    'description': 'The chatbot should reluctantly answer questions with sarcastic responses.',
    'pet': 'dog'
})

print(prompt_value.text)

Chat Prompt Templates ​

python
from langchain_core.prompts.chat import (
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    ChatPromptTemplate
)

# Define templates
system_template = SystemMessagePromptTemplate.from_template("{description}")
human_template = HumanMessagePromptTemplate.from_template(
    "I've recently adopted a {pet}. Could you suggest some {pet} names?"
)

# Combine into chat template
chat_template = ChatPromptTemplate.from_messages([
    system_template, 
    human_template
])

# Use the template
chat_value = chat_template.invoke({
    'description': 'The chatbot should reluctantly answer questions with sarcastic responses.',
    'pet': 'dog'
})

# Send to LLM
response = chat.invoke(chat_value)
print(response.content)

πŸ”§ Practical Input Patterns ​

1. Simple Chat Setup ​

python
from langchain_openai.chat_models import ChatOpenAI

# Initialize the chat model
chat = ChatOpenAI(
    model='gpt-4', 
    seed=365, 
    temperature=0, 
    max_tokens=100
)

# Simple prompt
response = chat.invoke("I've recently adopted a dog. Could you suggest some dog names?")
print(response.content)

2. Conversation with Context ​

python
from langchain_core.messages import SystemMessage, HumanMessage

# Create conversation with context
messages = [
    SystemMessage(content="You are a helpful pet naming assistant."),
    HumanMessage(content="I've adopted a playful golden retriever. Suggest names.")
]

response = chat.invoke(messages)
print(response.content)

3. Template-Based Approach ​

python
# Create reusable template
pet_naming_template = ChatPromptTemplate.from_messages([
    ("system", "You are a {personality} pet naming assistant."),
    ("human", "I've adopted a {pet_description}. Suggest {count} names.")
])

# Use template
prompt = pet_naming_template.invoke({
    "personality": "creative and fun",
    "pet_description": "small, energetic terrier",
    "count": "5"
})

response = chat.invoke(prompt)
print(response.content)

πŸš€ Getting Started Guide ​

Step 1: Install Dependencies ​

bash
pip install langchain langchain-openai python-dotenv

Step 2: Set Up Environment ​

python
%load_ext dotenv
%dotenv

# Your .env file should contain:
# OPENAI_API_KEY=your_api_key_here

Step 3: Basic Implementation ​

python
from langchain_openai.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import SystemMessage, HumanMessage

# Initialize model
chat = ChatOpenAI(model='gpt-4', temperature=0)

# Method 1: Direct messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain machine learning in simple terms.")
]
response = chat.invoke(messages)

# Method 2: Using templates
template = ChatPromptTemplate.from_messages([
    ("system", "You are a {role} assistant."),
    ("human", "Explain {topic} in {style} terms.")
])

prompt = template.invoke({
    "role": "helpful",
    "topic": "machine learning", 
    "style": "simple"
})
response = chat.invoke(prompt)

🎯 Best Practices ​

βœ… Do's ​

  1. Use SystemMessage for behavior: Define personality and constraints
  2. Structure conversations: Use HumanMessage and AIMessage for context
  3. Template reusable prompts: Create templates for common patterns
  4. Provide examples: Use few-shot prompting for complex tasks
  5. Set parameters: Use temperature and max_tokens appropriately

❌ Don'ts ​

  1. Don't ignore context: Always consider conversation history
  2. Don't over-complicate: Start simple and add complexity as needed
  3. Don't skip validation: Check inputs before processing
  4. Don't forget error handling: Handle API failures gracefully
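For the error-handling point, a common pattern is a small retry wrapper with exponential backoff around `invoke`. This sketch uses a generic `Exception` catch and a hypothetical flaky stub in place of a real model for illustration; in practice you would catch the provider-specific errors (e.g. rate-limit or timeout exceptions) instead:

```python
import time

def safe_invoke(model_invoke, prompt, retries=3, base_delay=1.0):
    """Call model_invoke(prompt), retrying with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return model_invoke(prompt)
        except Exception as exc:  # in practice, catch provider-specific errors
            if attempt == retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

# Stub that fails twice before succeeding, simulating transient API errors
calls = {"n": 0}
def flaky_invoke(prompt):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary network error")
    return f"response to: {prompt}"

print(safe_invoke(flaky_invoke, "Suggest dog names", base_delay=0.01))
```

For a real model you would pass `chat.invoke` as `model_invoke`; the wrapper doesn't depend on LangChain at all.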

πŸ“– Key Takeaways ​

  • LangChain evolved from simple strings to structured message objects
  • Message types (System, Human, AI) provide clear conversation structure
  • Few-shot prompting teaches models through examples
  • Templates make prompts reusable and maintainable
  • Practical implementation focuses on real-world patterns

Focus on practical implementation rather than theoretical complexity. Start with simple patterns and build up to more sophisticated approaches as needed.

Released under the MIT License.