## Introduction
Traditional customer support chatbots are trained on a fixed knowledge base that quickly becomes outdated. Pricing changes, new features launch, policies update, and competitors evolve — but the bot's answers stay frozen in time. By integrating real-time web search into your support AI, you create a system that always has current information.
In this article, we walk through the architecture and implementation of an AI customer support system powered by Keiro's search API.
## Why Web Search for Customer Support?
- **Always current:** Product updates, pricing changes, and new features are reflected in search results as soon as they are published
- **Competitor awareness:** When customers ask "How do you compare to X?", the bot can provide accurate, current comparisons
- **Industry knowledge:** Support agents can answer questions about industry trends and best practices
- **Reduced maintenance:** No need to manually update the knowledge base every time something changes
## Architecture
Our support system has three knowledge sources, checked in order:

1. Internal knowledge base (documentation, FAQs) — checked first for product-specific questions
2. Keiro web search — used for current information, competitor comparisons, and industry questions
3. LLM general knowledge — fallback for general questions
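The tiered lookup boils down to a simple cascade: try each source in priority order and stop at the first one that returns anything. Here is a minimal sketch; the stub sources are illustrative stand-ins, not Keiro's API:

```python
def gather_context(query: str, sources: list) -> tuple[str, str]:
    """Try each knowledge source in priority order; first non-empty result wins."""
    for name, fetch in sources:
        text = fetch(query)
        if text:
            return name, text
    return "llm", ""  # nothing matched: let the LLM answer from general knowledge

# Illustrative stand-ins for the real lookups
sources = [
    ("internal_docs", lambda q: "See the billing FAQ." if "billing" in q else ""),
    ("web_search", lambda q: f"Top web result for: {q}"),
]

name, text = gather_context("billing question", sources)
```

Because the cascade short-circuits, the paid web search only runs when the free internal lookup comes up empty.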
## Implementation

### The Support Agent Class
```python
import requests
from openai import OpenAI


class SupportAgent:
    def __init__(self, keiro_key: str, openai_key: str, company_docs: list[dict]):
        self.keiro_key = keiro_key
        self.keiro_base = "https://kierolabs.space/api"
        self.llm = OpenAI(api_key=openai_key)
        self.company_docs = company_docs  # Your internal knowledge base

    def classify_query(self, message: str) -> str:
        """Classify the type of support query."""
        response = self.llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": (
                    "Classify the customer support query into one of these categories:\n"
                    "- product: About our product features, usage, or documentation\n"
                    "- pricing: About our pricing, plans, or billing\n"
                    "- comparison: Comparing us to competitors\n"
                    "- industry: About industry trends or best practices\n"
                    "- general: General greetings or unrelated questions\n"
                    "Return only the category name."
                )},
                {"role": "user", "content": message}
            ],
            max_tokens=20
        )
        return response.choices[0].message.content.strip().lower()

    def search_internal_docs(self, query: str) -> str:
        """Search internal documentation (simplified)."""
        # In production, this would be a vector search over your docs
        relevant = []
        query_terms = query.lower().split()
        for doc in self.company_docs:
            if any(term in doc["content"].lower() for term in query_terms):
                relevant.append(doc["content"][:500])
        return "\n\n".join(relevant[:3]) if relevant else ""

    def search_web(self, query: str) -> str:
        """Search the web using Keiro."""
        resp = requests.post(f"{self.keiro_base}/search", json={
            "apiKey": self.keiro_key,
            "query": query
        }, timeout=10)
        resp.raise_for_status()
        results = resp.json().get("results", [])
        return "\n\n".join(
            f"Source: {r.get('title', 'N/A')} ({r.get('url', '')})\n"
            f"{r.get('content', r.get('snippet', ''))}"
            for r in results[:4]
        )

    def get_answer(self, query: str) -> str:
        """Use Keiro /answer for direct answers."""
        resp = requests.post(f"{self.keiro_base}/answer", json={
            "apiKey": self.keiro_key,
            "query": query
        }, timeout=10)
        resp.raise_for_status()
        return resp.json().get("response", "")
```
### The Main Response Logic
```python
    # (method of SupportAgent, continued from the class above)
    def respond(self, message: str, conversation_history: list | None = None) -> dict:
        """Generate a support response."""
        category = self.classify_query(message)
        context = ""
        used_web = False

        if category in ("product", "pricing"):
            # Check internal docs first
            internal = self.search_internal_docs(message)
            if internal:
                context = f"Internal Documentation:\n{internal}"
            else:
                # Fall back to web search for current info
                web_context = self.search_web(f"site:yourcompany.com {message}")
                context = f"Web Results:\n{web_context}"
                used_web = True
        elif category == "comparison":
            # Use web search for competitive information
            web_context = self.search_web(message)
            context = f"Competitive Research:\n{web_context}"
            used_web = True
        elif category == "industry":
            # Use Keiro /answer for industry questions
            answer = self.get_answer(message)
            if answer:
                return {"response": answer, "category": category, "sources": []}

        # Generate the response
        system_prompt = (
            "You are a helpful customer support agent. Be friendly, accurate, and concise. "
            "If you have search results or documentation, use them to answer accurately. "
            "Always be honest about what you know and do not know. "
            "Offer to connect the customer with a human agent for complex issues."
        )
        messages = [{"role": "system", "content": system_prompt}]
        if conversation_history:
            messages.extend(conversation_history)
        if context:
            messages.append({"role": "system", "content": f"Reference material:\n{context}"})
        messages.append({"role": "user", "content": message})

        response = self.llm.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            temperature=0.3
        )
        return {
            "response": response.choices[0].message.content,
            "category": category,
            "used_web_search": used_web
        }
```
## Handling Common Support Scenarios

### Scenario 1: "What is your pricing?"
The agent checks internal docs for pricing information. If the docs return nothing relevant, it falls back to searching the company website for current pricing.
### Scenario 2: "How do you compare to CompetitorX?"
The agent uses Keiro web search to find current information about the competitor, then generates a fair comparison. This ensures the comparison reflects the competitor's latest features and pricing.
### Scenario 3: "What is the industry standard for X?"
The agent uses Keiro's /answer endpoint to provide a sourced answer about industry standards, ensuring the information is current.
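The three scenarios reduce to a category-to-source routing table. A minimal sketch of that mapping (the labels are descriptive, not Keiro API names):

```python
# Which knowledge source handles each query category
ROUTES = {
    "product": "internal docs, then company-site web search",
    "pricing": "internal docs, then company-site web search",
    "comparison": "Keiro web search",
    "industry": "Keiro /answer endpoint",
}

def knowledge_source(category: str) -> str:
    """Anything unrecognized (e.g. 'general') falls through to the LLM."""
    return ROUTES.get(category, "LLM general knowledge")
```

Keeping the routing in a table like this makes it easy to add categories later without touching the branching logic in `respond`.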
## Pre-Computed Answers with Batch Processing
For frequently asked questions, pre-compute answers using Keiro's free batch processing:
```python
import requests

# Pre-compute answers for your top 100 FAQs
faqs = [
    "How do I reset my password?",
    "What are your pricing plans?",
    "How do I integrate with Slack?",
    # ... 97 more FAQs
]

response = requests.post("https://kierolabs.space/api/batch-search", json={
    "apiKey": "your-keiro-api-key",
    "queries": faqs
}, timeout=60)
response.raise_for_status()

# Store results in your database for instant retrieval
batch_results = response.json()["results"]
```
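Once the batch comes back, index it by query so the chatbot can answer a known FAQ without any API call. A sketch, assuming each result carries a `"query"` field alongside its payload (the exact batch-response shape is an assumption here):

```python
def build_faq_index(batch_results: list[dict]) -> dict[str, dict]:
    """Index batch results by normalized query text for O(1) FAQ lookup."""
    return {item["query"].strip().lower(): item for item in batch_results}

# Illustrative payload shape
faq_index = build_faq_index([
    {"query": "How do I reset my password?", "results": ["Go to Settings > Security..."]},
    {"query": "What are your pricing plans?", "results": ["Plans start at $5.99/month..."]},
])

hit = faq_index.get("how do i reset my password?")
```

On an exact match the bot serves the stored answer instantly; on a miss it falls through to the live search path described above.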
## Cost Analysis for Customer Support
| Metric | Without Web Search | With Keiro |
|---|---|---|
| Monthly support conversations | 50,000 | 50,000 |
| Queries needing web search | 0 | ~15,000 (30%) |
| Keiro cost | $0 | $14.99 (Essential plan) |
| Customer satisfaction | Baseline | +25% (current, accurate answers) |
| Knowledge base maintenance | 40 hours/month | 10 hours/month |
The $14.99/month for Keiro's Essential plan covers 50,000 requests — more than enough for the web search component. The real ROI comes from reduced maintenance time and improved customer satisfaction.
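The per-query economics from the table work out to a fraction of a cent:

```python
# Back-of-envelope cost per web-searched query, using the table's numbers
monthly_cost = 14.99        # Essential plan
searched_queries = 15_000   # ~30% of 50,000 monthly conversations
cost_per_query = monthly_cost / searched_queries
print(f"${cost_per_query:.4f} per searched query")
```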
## Production Considerations
- **Response time:** Web search adds 400-600ms of latency. For chat interfaces, show a "searching..." indicator while the search runs.
- **Caching:** Keiro's 50% cache discount kicks in automatically for repeated queries, which is common in support (many customers ask similar questions).
- **Fallback handling:** Always have a graceful fallback if the search API is unavailable. The bot should still work with internal docs and LLM knowledge.
- **Human handoff:** Design clear escalation paths for questions the bot cannot answer confidently.
- **Logging:** Log all web searches and their results for quality review and improvement.
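The caching and fallback points can be combined in one small wrapper around any search function. A sketch (the TTL value and empty-string fallback convention are choices for this article, not Keiro behavior):

```python
import time

def cached_with_fallback(search_fn, ttl: float = 300.0):
    """Wrap a search function with a client-side TTL cache and a graceful fallback.

    On failure, returns "" so the caller simply proceeds with no web context,
    falling back to internal docs and LLM knowledge.
    """
    cache: dict[str, tuple[float, str]] = {}

    def wrapped(query: str) -> str:
        now = time.time()
        if query in cache and now - cache[query][0] < ttl:
            return cache[query][1]  # fresh cached result: no network call
        try:
            result = search_fn(query)
            cache[query] = (now, result)
            return result
        except Exception:
            return ""  # search API unavailable: degrade gracefully

    return wrapped
```

Usage: `agent.search_web = cached_with_fallback(agent.search_web)` keeps repeated questions from hitting the network at all, and a Keiro outage degrades to docs-plus-LLM answers instead of errors.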
## Conclusion
Adding real-time web search to customer support AI creates a system that stays current without constant manual maintenance. Keiro's affordable pricing and diverse endpoints make it practical for support teams of any size. The combination of internal documentation, web search, and LLM generation creates a support experience that is accurate, current, and helpful.
Build smarter support with Keiro. Start at kierolabs.space from $5.99/month.