
Installation & Setup - Getting Started with LangChain Development

Complete guide to installing LangChain, setting up your development environment, and configuring integrations for production-ready AI applications

🎯 Quick Start Installation

📦 Core Installation

bash
# Install LangChain core package
pip install langchain

# Note: the "langchain[all]" and "langchain[llms]" extras are from older
# 0.0.x releases and have been removed; install the integration packages
# you need individually, as shown in the full setup below
bash
# Create virtual environment
python -m venv langchain_env
source langchain_env/bin/activate  # On Windows: langchain_env\Scripts\activate

# Install core packages
pip install langchain
pip install langchain-openai      # OpenAI integration
pip install langchain-community   # Community integrations
pip install langchain-experimental # Experimental features

# Development tools
pip install jupyter              # For notebooks
pip install python-dotenv       # Environment variables
pip install langsmith           # Monitoring and debugging
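
After installing, a quick sanity check confirms the environment is wired up correctly (a minimal sketch; it only needs the packages installed above):

python
# Verify the installation inside the activated virtual environment
import langchain
print("LangChain version:", langchain.__version__)

# Core imports should resolve without errors
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI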

🔧 Provider-Specific Installations

🤖 Language Model Providers

OpenAI

bash
pip install langchain-openai
python
# Setup
import os
from langchain_openai import ChatOpenAI

# Set API key (recommended: use environment variables)
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Create model instance; ChatOpenAI also reads OPENAI_API_KEY from the
# environment automatically, so passing api_key explicitly is optional
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.7,
    api_key=os.getenv("OPENAI_API_KEY")
)
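
A one-line smoke test verifies the key end to end; `invoke` returns an `AIMessage`, with the generated text on `.content`:

python
# Smoke test (makes a real API call, so it consumes a few tokens)
response = llm.invoke("Say hello in one short sentence.")
print(response.content)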

Anthropic (Claude)

bash
pip install langchain-anthropic
python
# Setup
from langchain_anthropic import ChatAnthropic

os.environ["ANTHROPIC_API_KEY"] = "your-api-key-here"

llm = ChatAnthropic(
    model="claude-3-sonnet-20240229",
    temperature=0.7
)

Google (Gemini)

bash
pip install langchain-google-genai
python
# Setup
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = "your-api-key-here"

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7
)

Local Models (Ollama)

bash
# Install Ollama first: https://ollama.ai
# Then pull a model
ollama pull llama2

# Install LangChain integration
pip install langchain-community
python
# Setup
from langchain_community.llms import Ollama

# No API key needed for local models
llm = Ollama(model="llama2")
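
Note that `langchain_community.llms.Ollama` is the older text-completion interface; newer releases also ship a dedicated partner package with a chat interface, sketched below assuming you have installed it:

python
# Alternative: the dedicated partner package
# pip install langchain-ollama
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama2")
print(llm.invoke("Hello!").content)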

Hugging Face

bash
pip install langchain-huggingface
pip install transformers torch
python
# Setup
from langchain_huggingface import HuggingFacePipeline

# Local model; generation settings belong in pipeline_kwargs
llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/DialoGPT-medium",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 100, "do_sample": True, "temperature": 0.7}
)

πŸ” Vector Stores & Embeddings ​

bash
pip install langchain-chroma
python
# Setup
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = Chroma(
    persist_directory="./chroma_db",
    embedding_function=embeddings
)
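
Once the store exists, indexing and querying use the shared vector-store interface; a small usage sketch:

python
# Index a few texts, then query by semantic similarity
vectorstore.add_texts([
    "LangChain composes LLM calls into chains.",
    "Chroma persists embeddings locally on disk.",
])
results = vectorstore.similarity_search("Where are embeddings stored?", k=1)
print(results[0].page_content)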

Pinecone (Production vector DB)

bash
pip install langchain-pinecone
pip install pinecone  # the SDK was renamed from pinecone-client
python
# Setup
from langchain_pinecone import PineconeVectorStore
from pinecone import Pinecone

os.environ["PINECONE_API_KEY"] = "your-api-key"

pc = Pinecone(api_key=os.getenv("PINECONE_API_KEY"))
vectorstore = PineconeVectorStore(
    index_name="your-index-name",
    embedding=embeddings
)

FAISS (Local similarity search)

bash
pip install langchain-community
pip install faiss-cpu  # or faiss-gpu for GPU support
python
# Setup
from langchain_community.vectorstores import FAISS

# docs: a list of Document objects (e.g., from a loader);
# embeddings: the embedding model defined above
vectorstore = FAISS.from_documents(
    documents=docs,
    embedding=embeddings
)
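
FAISS indexes live in memory, so persist them explicitly between runs; recent versions require opting in to pickle deserialization when reloading an index you created yourself:

python
# Persist the index to disk and reload it later
vectorstore.save_local("faiss_index")

restored = FAISS.load_local(
    "faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,  # the index metadata is pickled
)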

📚 Document Loaders

bash
# PDF documents
pip install pypdf

# Web scraping
pip install beautifulsoup4 requests

# Office documents
pip install python-docx python-pptx

# Database connections
pip install sqlalchemy psycopg2-binary  # PostgreSQL
pip install pymongo  # MongoDB

# APIs and web
pip install requests aiohttp
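
Loaders follow one pattern regardless of source: construct with a location, then call `.load()` to get a list of `Document` objects. A short sketch (the file path and URL are placeholders):

python
from langchain_community.document_loaders import PyPDFLoader, WebBaseLoader

# PDF -> one Document per page (requires pypdf)
pdf_docs = PyPDFLoader("data/documents/report.pdf").load()

# Web page -> Documents parsed with BeautifulSoup (requires beautifulsoup4)
web_docs = WebBaseLoader("https://example.com").load()

print(len(pdf_docs), "PDF pages,", len(web_docs), "web documents")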

🌍 Environment Configuration

πŸ“ Environment Variables (.env file) ​

Create a .env file in your project root (and list it in .gitignore so keys never reach version control):

bash
# .env file
# OpenAI
OPENAI_API_KEY=sk-your-openai-key-here

# Anthropic
ANTHROPIC_API_KEY=your-anthropic-key-here

# Google
GOOGLE_API_KEY=your-google-key-here

# Pinecone
PINECONE_API_KEY=your-pinecone-key-here
PINECONE_ENVIRONMENT=your-pinecone-environment

# LangSmith (monitoring)
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-key-here
LANGCHAIN_PROJECT=your-project-name

# Custom settings
LANGCHAIN_VERBOSE=true
TEMPERATURE=0.7
MAX_TOKENS=1000

🐍 Loading Environment Variables

python
# Method 1: Using python-dotenv (recommended)
from dotenv import load_dotenv
import os

load_dotenv()  # Load .env file

api_key = os.getenv("OPENAI_API_KEY")

# Method 2: Direct environment access
import os

api_key = os.environ.get("OPENAI_API_KEY")

# Method 3: With default values
api_key = os.getenv("OPENAI_API_KEY", "default-key")
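
A fail-fast check at startup surfaces missing keys before the first API call rather than deep inside a chain; a minimal sketch:

python
# Validate required variables at startup
import os
from dotenv import load_dotenv

load_dotenv()

REQUIRED_VARS = ["OPENAI_API_KEY"]  # extend with the providers you use

missing = [var for var in REQUIRED_VARS if not os.getenv(var)]
if missing:
    raise EnvironmentError(f"Missing environment variables: {', '.join(missing)}")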

πŸ—οΈ Project Structure Best Practices ​

your-langchain-project/
├── .env                          # Environment variables
├── .gitignore                    # Git ignore file
├── requirements.txt              # Python dependencies
├── README.md                     # Project documentation
├── main.py                       # Main application entry
├── config/
│   ├── __init__.py
│   ├── settings.py               # Configuration management
│   └── prompts.py                # Prompt templates
├── src/
│   ├── __init__.py
│   ├── chains/                   # Custom chains
│   │   ├── __init__.py
│   │   ├── qa_chain.py
│   │   └── summarization_chain.py
│   ├── agents/                   # Custom agents
│   │   ├── __init__.py
│   │   └── research_agent.py
│   ├── tools/                    # Custom tools
│   │   ├── __init__.py
│   │   └── calculator.py
│   └── utils/                    # Utility functions
│       ├── __init__.py
│       ├── document_loader.py
│       └── vector_store.py
├── data/                         # Data files
│   ├── documents/
│   └── vector_stores/
├── notebooks/                    # Jupyter notebooks
│   ├── exploration.ipynb
│   └── experiments.ipynb
├── tests/                        # Test files
│   ├── __init__.py
│   ├── test_chains.py
│   └── test_agents.py
└── logs/                         # Application logs
    └── app.log

📋 Configuration Management

python
# config/settings.py
import os
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMConfig:
    provider: str = "openai"
    model: str = "gpt-3.5-turbo"
    temperature: float = 0.7
    max_tokens: int = 1000
    api_key: Optional[str] = None

@dataclass
class VectorStoreConfig:
    provider: str = "chroma"
    persist_directory: str = "./vector_store"
    collection_name: str = "documents"

@dataclass
class AppConfig:
    debug: bool = False
    log_level: str = "INFO"
    # Nested dataclass defaults must use default_factory; a bare
    # LLMConfig() default raises ValueError on Python 3.11+
    llm: LLMConfig = field(default_factory=LLMConfig)
    vector_store: VectorStoreConfig = field(default_factory=VectorStoreConfig)

def load_config() -> AppConfig:
    """Load configuration from environment variables"""
    config = AppConfig()
    
    # LLM configuration
    config.llm.api_key = os.getenv("OPENAI_API_KEY")
    config.llm.temperature = float(os.getenv("TEMPERATURE", "0.7"))
    config.llm.max_tokens = int(os.getenv("MAX_TOKENS", "1000"))
    
    # App configuration
    config.debug = os.getenv("DEBUG", "false").lower() == "true"
    config.log_level = os.getenv("LOG_LEVEL", "INFO")
    
    return config
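
An entry point can then consume the config so provider settings stay out of the business logic; a sketch of how `main.py` might wire it together:

python
# main.py (sketch)
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

from config.settings import load_config

load_dotenv()
config = load_config()

llm = ChatOpenAI(
    model=config.llm.model,
    temperature=config.llm.temperature,
    max_tokens=config.llm.max_tokens,
    api_key=config.llm.api_key,
)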

πŸ” Development Tools Setup ​

📊 LangSmith (Monitoring & Debugging)

LangSmith provides debugging, testing, and monitoring for LangChain applications.

bash
pip install langsmith
python
# Enable LangSmith tracing
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key"
os.environ["LANGCHAIN_PROJECT"] = "your-project-name"

# Your LangChain code will now be automatically traced
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("Tell me about {topic}") | ChatOpenAI()
result = chain.invoke({"topic": "Python"})  # This will be traced

🧪 Testing Setup

bash
pip install pytest pytest-asyncio
python
# tests/test_chains.py
import pytest
from src.chains.qa_chain import create_qa_chain

@pytest.fixture
def mock_llm():
    """Mock LLM for testing"""
    class MockLLM:
        def invoke(self, input_text):
            return "Mocked response"
    return MockLLM()

def test_qa_chain(mock_llm):
    """Test Q&A chain functionality"""
    chain = create_qa_chain(mock_llm)
    result = chain.invoke({"question": "What is Python?"})
    assert "Mocked response" in result

πŸ“ Logging Setup ​

python
# utils/logger.py
import logging
import os
from datetime import datetime

def setup_logger(name: str = "langchain_app") -> logging.Logger:
    """Set up application logger"""
    logger = logging.getLogger(name)
    
    if not logger.handlers:
        # Create logs directory if it doesn't exist
        os.makedirs("logs", exist_ok=True)
        
        # File handler
        file_handler = logging.FileHandler(
            f"logs/{name}_{datetime.now().strftime('%Y%m%d')}.log"
        )
        file_handler.setLevel(logging.INFO)
        
        # Console handler
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.DEBUG)
        
        # Formatter
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        )
        file_handler.setFormatter(formatter)
        console_handler.setFormatter(formatter)
        
        # Add handlers
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        logger.setLevel(logging.DEBUG)
    
    return logger

# Usage
logger = setup_logger()
logger.info("Application started")

🐳 Docker Setup

📦 Dockerfile

dockerfile
# Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create directories
RUN mkdir -p logs data/vector_stores

# Set environment variables
ENV PYTHONPATH=/app
ENV PYTHONUNBUFFERED=1

# Expose port
EXPOSE 8000

# Run application
CMD ["python", "main.py"]

📋 docker-compose.yml

yaml
# docker-compose.yml
version: '3.8'

services:
  langchain-app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
    volumes:
      - ./data:/app/data
      - ./logs:/app/logs
    depends_on:
      - redis
      - postgres

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=langchain_db
      - POSTGRES_USER=langchain_user
      - POSTGRES_PASSWORD=langchain_pass
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  redis_data:
  postgres_data:

🚀 Production Deployment Considerations

🔒 Security Best Practices

python
# security/secrets_manager.py
import os
from cryptography.fernet import Fernet

class SecretsManager:
    def __init__(self):
        # Use a key from environment or AWS Secrets Manager
        self.key = os.getenv("ENCRYPTION_KEY")
        if not self.key:
            raise ValueError("ENCRYPTION_KEY not found in environment")
        self.cipher = Fernet(self.key.encode())
    
    def encrypt_secret(self, secret: str) -> str:
        """Encrypt a secret"""
        return self.cipher.encrypt(secret.encode()).decode()
    
    def decrypt_secret(self, encrypted_secret: str) -> str:
        """Decrypt a secret"""
        return self.cipher.decrypt(encrypted_secret.encode()).decode()

# Usage
secrets = SecretsManager()
encrypted_key = os.getenv("ENCRYPTED_OPENAI_KEY")
if encrypted_key:  # guard against a missing variable
    api_key = secrets.decrypt_secret(encrypted_key)
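
The encryption key and the encrypted secret both have to be produced once, out of band; a hypothetical one-time setup using `Fernet.generate_key()` (the ENCRYPTION_KEY and ENCRYPTED_OPENAI_KEY names are this guide's own):

python
# One-time setup: generate a key, encrypt the API key, store both securely
from cryptography.fernet import Fernet

key = Fernet.generate_key()                          # -> ENCRYPTION_KEY
token = Fernet(key).encrypt(b"sk-your-openai-key")   # -> ENCRYPTED_OPENAI_KEY
print(key.decode())
print(token.decode())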

📊 Monitoring Setup

python
# monitoring/metrics.py
import time
from functools import wraps
from prometheus_client import Counter, Histogram, start_http_server

# Metrics
REQUEST_COUNT = Counter('langchain_requests_total', 'Total requests', ['chain_type'])
REQUEST_DURATION = Histogram('langchain_request_duration_seconds', 'Request duration')

def monitor_chain(chain_type: str):
    """Decorator to monitor chain execution"""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start_time = time.time()
            try:
                result = func(*args, **kwargs)
                REQUEST_COUNT.labels(chain_type=chain_type).inc()
                return result
            finally:
                REQUEST_DURATION.observe(time.time() - start_time)
        return wrapper
    return decorator

# Start metrics server
start_http_server(8001)
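
Any function that invokes a chain can then opt in with the decorator, and Prometheus scrapes both metrics from port 8001; for example:

python
# Example: instrument a chain-invoking function
@monitor_chain(chain_type="qa")
def answer_question(chain, question: str):
    return chain.invoke({"question": question})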

⚡ Performance Optimization

python
# optimization/caching.py
import redis
import json
import hashlib
from typing import Any, Optional

class ResponseCache:
    def __init__(self, redis_host: str = "localhost", redis_port: int = 6379):
        self.redis_client = redis.Redis(host=redis_host, port=redis_port, decode_responses=True)
        self.default_ttl = 3600  # 1 hour
    
    def _generate_key(self, input_data: dict) -> str:
        """Generate cache key from input data"""
        serialized = json.dumps(input_data, sort_keys=True)
        return hashlib.md5(serialized.encode()).hexdigest()
    
    def get(self, input_data: dict) -> Optional[Any]:
        """Get cached response"""
        key = self._generate_key(input_data)
        cached = self.redis_client.get(key)
        return json.loads(cached) if cached else None
    
    def set(self, input_data: dict, response: Any, ttl: Optional[int] = None) -> None:
        """Cache response"""
        key = self._generate_key(input_data)
        self.redis_client.setex(
            key, 
            ttl or self.default_ttl, 
            json.dumps(response)
        )

# Usage in chains
cache = ResponseCache()

def cached_chain_invoke(chain, input_data):
    # Check cache first
    cached_response = cache.get(input_data)
    if cached_response:
        return cached_response
    
    # Execute chain
    response = chain.invoke(input_data)
    
    # Cache response
    cache.set(input_data, response)
    
    return response

πŸ› οΈ Common Issues & Solutions ​

❌ Common Installation Issues

Issue: Package conflicts

bash
# Solution: Use virtual environment
python -m venv fresh_env
source fresh_env/bin/activate
pip install --upgrade pip
pip install langchain

Issue: SSL certificate errors

bash
# Solution: Upgrade certificates
pip install --upgrade certifi
# Or use specific index
pip install --trusted-host pypi.org --trusted-host pypi.python.org langchain

Issue: Memory errors with large models

python
# Solution: run a quantized GGUF model locally with llama.cpp
# (pip install llama-cpp-python)
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="path/to/model.gguf",
    n_gpu_layers=0,  # Use CPU
    n_batch=128,     # Smaller batch size
    verbose=False
)

🔧 Configuration Troubleshooting

Issue: API key not found

python
# Debug API key loading
import os
from dotenv import load_dotenv

load_dotenv()
print(f"API Key loaded: {'OPENAI_API_KEY' in os.environ}")
print(f"Key starts with: {os.getenv('OPENAI_API_KEY', 'Not found')[:10]}...")

Issue: Import errors

python
# Check which LangChain packages are installed
# (pkg_resources is deprecated; importlib.metadata is the stdlib replacement)
from importlib.metadata import distributions

installed = [dist.metadata["Name"] for dist in distributions()]
print("LangChain packages:", [p for p in installed if p and "langchain" in p.lower()])

📚 Next Steps

Now that you have LangChain set up, continue your journey:

  1. Language Models - Connect to different AI providers
  2. LCEL Basics - Learn the expression language
  3. Prompt Templates - Master prompt engineering
  4. Hands-on Tutorial - Practice with real examples

Setup Checklist:

  • ✅ Virtual environment created and activated
  • ✅ LangChain and required packages installed
  • ✅ Environment variables configured
  • ✅ API keys tested and working
  • ✅ Project structure organized
  • ✅ Development tools set up
  • ✅ First chain successfully executed

You're now ready to build amazing AI applications with LangChain!
