```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["history", "user_input"],
    template="""You are a helpful assistant. Use the chat history to keep context.
Chat History:
{history}
User: {user_input}
Assistant:"""
)
```
LangChain is a Python framework for building applications that use large language models (LLMs) in modular and composable ways. It provides core components like prompt templates, memory, chains, and model wrappers. This post walks through building a basic chatbot that remembers user input using LangChain objects and method calls.
## Installation
Install the required packages:

```shell
pip install langchain openai
```
Set your OpenAI API key:

```python
import os

os.environ["OPENAI_API_KEY"] = "your-key-here"
```
## LangChain Components Used
| Object | Purpose |
|---|---|
| LLM | Interface to language models (e.g., OpenAI, HuggingFace, Anthropic). |
| PromptTemplate | Structure and format inputs to the LLM. |
| OutputParser | Parse raw LLM outputs into usable formats (e.g., JSON, numbers). |
| Chain | Composable unit combining inputs, prompt, LLM, and output parser. |
| Tool | Wraps arbitrary functions for agent use (e.g., calculator, web search). |
| Agent | Dynamically selects tools and makes decisions to solve tasks. |
| Retriever | Retrieves documents from a vector store or search index. |
| Document | Basic unit of text + metadata (used in RAG). |
| Memory | Stores chat history or intermediate data between runs. |
| VectorStore | Stores and retrieves embeddings (e.g., FAISS, Chroma, Pinecone). |
Each component has methods that follow standard object-oriented patterns. You can inspect, override, and compose them as needed.
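To make the object-oriented pattern concrete, here is a minimal sketch of the `OutputParser` idea. It is written as a plain class so it runs without LangChain installed; in LangChain itself you would subclass `BaseOutputParser`, whose one required method is `parse()`:

```python
class CommaSeparatedParser:
    """Sketch of LangChain's OutputParser pattern as a plain class.

    In LangChain proper this would subclass BaseOutputParser; the only
    required method is parse(), which receives the raw model text.
    """

    def parse(self, text: str) -> list:
        # Split the raw completion on commas and trim whitespace.
        return [item.strip() for item in text.split(",")]


parser = CommaSeparatedParser()
print(parser.parse("red, green, blue"))  # ['red', 'green', 'blue']
```

The same shape applies to the other components: a small, well-defined interface that chains and agents can call without caring about the implementation behind it.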
## Step-by-Step Example
### Define the Prompt Template

The `PromptTemplate` shown at the top of this post structures each turn: it interpolates the running `{history}` and the new `{user_input}` into a single prompt string.
### Set Up Conversation Memory
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="history")
```
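Conceptually, `ConversationBufferMemory` just accumulates turns and exposes the transcript under `memory_key`. A plain-Python sketch of that behavior (not LangChain's actual implementation, but the same save/load contract):

```python
class BufferMemorySketch:
    """Plain-Python sketch of what ConversationBufferMemory does:
    accumulate turns and expose the transcript under memory_key."""

    def __init__(self, memory_key: str = "history"):
        self.memory_key = memory_key
        self.turns = []

    def save_context(self, inputs: dict, outputs: dict) -> None:
        # Called after each chain run to record the latest exchange.
        self.turns.append(f"Human: {inputs['input']}")
        self.turns.append(f"AI: {outputs['output']}")

    def load_memory_variables(self, inputs: dict) -> dict:
        # Called before each run; the value fills the {history} slot.
        return {self.memory_key: "\n".join(self.turns)}


memory_sketch = BufferMemorySketch()
memory_sketch.save_context({"input": "Hi, my name is Ravi"},
                           {"output": "Hello Ravi!"})
print(memory_sketch.load_memory_variables({})["history"])
```

Because the whole transcript is replayed into the prompt on every call, the buffer grows with conversation length, which is why LangChain also offers windowed and summarizing memory variants.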
### Create the LLM
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.5)
```
### Build the Chain
```python
from langchain.chains import LLMChain

chat_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
```
### Run the Chatbot
```python
response1 = chat_chain.run("Hi, my name is Ravi")
response2 = chat_chain.run("What's my name?")

print(response1)
print(response2)
```
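On the second call, the memory injects the first exchange into `{history}`, so the model sees the name before being asked about it. A standalone sketch of the fully rendered second-turn prompt (the `AI:` reply text is hypothetical; the real wording depends on the model):

```python
template = """You are a helpful assistant. Use the chat history to keep context.
Chat History:
{history}
User: {user_input}
Assistant:"""

# Hypothetical first-turn reply; the actual text depends on the model.
history = "Human: Hi, my name is Ravi\nAI: Hi Ravi! How can I help you today?"

print(template.format(history=history, user_input="What's my name?"))
```

This is why the chatbot can answer "What's my name?": the answer is literally present in the prompt it receives, not remembered by the model itself.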
## Summary
This example uses LangChain’s core abstractions to implement a chatbot that preserves conversational state. By combining prompt templating, language models, and memory into a chain, you can create more coherent and interactive applications. The same pattern extends to document retrieval, tool use, planning, and multi-agent systems.