LangChain is a Python framework for building applications that use large language models (LLMs) in modular, composable ways. It provides core components such as prompt templates, memory, chains, and model wrappers. This post walks through building a basic chatbot that remembers user input using LangChain objects and method calls.
Installation
Install the required packages:
```shell
pip install langchain openai
```

Set your OpenAI API key:

```python
import os
os.environ["OPENAI_API_KEY"] = "your-key-here"
```

LangChain Components Used
| Object | Purpose |
|---|---|
| LLM | Interface to language models (e.g., OpenAI, HuggingFace, Anthropic). |
| PromptTemplate | Structures and formats inputs to the LLM. |
| OutputParser | Parses raw LLM outputs into usable formats (e.g., JSON, numbers). |
| Chain | Composable unit combining inputs, prompt, LLM, and output parser. |
| Tool | Wraps arbitrary functions for agent use (e.g., calculator, web search). |
| Agent | Dynamically selects tools and makes decisions to solve tasks. |
| Retriever | Retrieves documents from a vector store or search index. |
| Document | Basic unit of text plus metadata (used in RAG). |
| Memory | Stores chat history or intermediate data between runs. |
| VectorStore | Stores and retrieves embeddings (e.g., FAISS, Chroma, Pinecone). |
Each component has methods that follow standard object-oriented patterns. You can inspect, override, and compose them as needed.
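For instance, an output parser is just an object exposing a `parse()` method that turns raw model text into a structured value. A simplified, dependency-free sketch of that pattern (`CommaListParser` is a hypothetical illustration, not a LangChain class):

```python
# Simplified sketch of the OutputParser pattern: a class whose
# parse() method turns a raw LLM completion into structured data.
# CommaListParser is a hypothetical example, not a LangChain class.
class CommaListParser:
    def parse(self, text: str) -> list[str]:
        # Split a comma-separated completion into a clean list.
        return [item.strip() for item in text.split(",") if item.strip()]

parser = CommaListParser()
result = parser.parse("red, green, blue")  # a raw LLM completion
# → ['red', 'green', 'blue']
```

LangChain's real parsers follow the same shape, which is why they can be dropped into a chain interchangeably.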
Step-by-Step Example
Define the Prompt Template

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["history", "user_input"],
    template="""
You are a helpful assistant. Use the chat history to keep context.
Chat History:
{history}
User: {user_input}
Assistant:""",
)
```
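Calling `prompt.format()` fills the `{history}` and `{user_input}` placeholders much like Python's built-in `str.format`. A dependency-free illustration of the text the template produces (plain Python, no LangChain):

```python
# What prompt.format() produces, sketched with str.format:
# PromptTemplate uses the same curly-brace placeholder syntax.
template = """
You are a helpful assistant. Use the chat history to keep context.
Chat History:
{history}
User: {user_input}
Assistant:"""

filled = template.format(
    history="User: Hi, my name is Ravi\nAssistant: Hello, Ravi!",
    user_input="What's my name?",
)
print(filled)
```

The result is the full prompt string the LLM actually receives, ending with `Assistant:` so the model continues from there.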
Set Up Conversation Memory
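`ConversationBufferMemory` accumulates each exchange and exposes the transcript to the prompt under the configured `memory_key`. A toy illustration of the idea (not LangChain's implementation):

```python
# Toy illustration of the buffer-memory idea: accumulate turns
# and expose them as a single "history" string for the prompt.
# This is a sketch, not LangChain's ConversationBufferMemory.
class BufferMemory:
    def __init__(self):
        self.turns = []

    def save_context(self, user_input, ai_output):
        self.turns.append(f"User: {user_input}")
        self.turns.append(f"Assistant: {ai_output}")

    def load_history(self):
        return "\n".join(self.turns)

toy_memory = BufferMemory()
toy_memory.save_context("Hi, my name is Ravi", "Hello, Ravi!")
print(toy_memory.load_history())
```

The real LangChain object does this bookkeeping for you: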
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="history")
```

Create the LLM
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.5)
```

Build the Chain
```python
from langchain.chains import LLMChain

chat_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
```

Run the Chatbot
```python
response1 = chat_chain.run("Hi, my name is Ravi")
response2 = chat_chain.run("What's my name?")
print(response1)
print(response2)
```

Summary
This example uses LangChain’s core abstractions to implement a chatbot that preserves conversational state. By combining prompt templating, language models, and memory into a chain, you can create more coherent and interactive applications. The same pattern extends to document retrieval, tool use, planning, and multi-agent systems.
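As a compact recap of that pattern, here is what each `chat_chain.run()` call does conceptually: load the saved history, format the prompt, call the model, and write the new turn back to memory. A dependency-free sketch, where `echo_llm` is a hypothetical stand-in for a real model call:

```python
# Sketch of the per-call chain loop: history -> prompt -> model -> memory.
# echo_llm is a placeholder assumption; a real chain calls the LLM API.
def echo_llm(prompt_text: str) -> str:
    return f"(model reply to a {len(prompt_text)}-char prompt)"

def run_chain(user_input: str, memory_turns: list) -> str:
    history = "\n".join(memory_turns)          # load memory
    prompt_text = (                            # format the prompt
        f"Chat History:\n{history}\nUser: {user_input}\nAssistant:"
    )
    reply = echo_llm(prompt_text)              # call the model
    memory_turns.append(f"User: {user_input}") # save the new turn
    memory_turns.append(f"Assistant: {reply}")
    return reply

turns = []
run_chain("Hi, my name is Ravi", turns)
run_chain("What's my name?", turns)  # the second call sees the first turn
```

Because every call reads from and writes to the shared memory, the second prompt contains the first exchange, which is exactly how the chatbot above can answer "What's my name?".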