How to Build a Real-Time AI Search Agent Using Gemini 3 and LangChain

Share it with your senior IT friends and colleagues
Reading Time: 3 minutes

Namaste and Welcome to Build It Yourself.


Large Language Models (LLMs) are powerful, but they all suffer from one big limitation:
they cannot answer questions about events that happened after their training data cutoff.

Even the newest models, like Gemini 3 Pro, may give outdated answers for current events.

In this guide, we’ll build a Search-Enabled AI Agent that solves this problem.
The agent uses Gemini 3 for reasoning and SERPAPI (Google Search API) for fresh information.

By the end, you’ll be able to ask real-time questions like “Who is the current president of the US?” and get up-to-date answers, not stale model knowledge.


Colab Notebook: https://github.com/tayaln/AI-Search-Agent-using-Gemini3

Why this AI Search Agent?

LLMs are pre-trained. They do not learn continuously.
So even if the model is new, its knowledge may be 8–12 months old.

Example:
When we ask Gemini 3 Pro:
“Who is the current president of the US?”
The model replies "Joe Biden", which is outdated.

To fix this, we give the LLM access to Google Search.
Whenever a question requires fresh knowledge, the agent can call this search tool and return updated results.
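In the agent we build below, the LLM itself decides when to search. But the routing idea can be shown with a deliberately naive keyword heuristic; needs_search, answer, and fake_search here are illustrative stand-ins, not part of LangChain:

```python
# Naive sketch: decide whether a question needs a live web search.
# The real agent lets the LLM make this call; a substring check like
# this is only to illustrate the routing idea.
FRESHNESS_HINTS = ("current", "today", "latest", "this year")

def needs_search(question: str) -> bool:
    q = question.lower()
    return any(hint in q for hint in FRESHNESS_HINTS)

def answer(question: str, model_answer: str, search) -> str:
    if needs_search(question):
        return search(question)   # fresh info from the web
    return model_answer           # fall back to model knowledge

# Stubbed search so the example runs without an API key
fake_search = lambda q: "Donald Trump (per Google Search)"
print(answer("Who is the current president of the US?", "Joe Biden", fake_search))
```

The agent version replaces the keyword check with the model's own reasoning, which handles questions that need fresh data but do not contain obvious trigger words.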


Tools We’ll Use

To build this real-time agent, we use:

  • Gemini 3 Pro Preview → our core LLM
  • LangChain → agent framework
  • LangChain Google GenAI → Gemini integration
  • SERPAPI → Google Search results
  • AgentExecutor → runs the agent loop
  • Prompt Templates → define the agent’s thinking format

You’ll need API keys for:

  • Google Gemini API
  • SERPAPI

These are stored in environment variables before the agent is run.
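Step 3 below reads the keys from Colab's Secrets panel. If you are running locally instead, a plain-Python alternative (the key values shown are placeholders):

```python
import os

# Outside Colab, set the keys directly in the environment.
# Replace the placeholder values with your real keys.
os.environ.setdefault("GOOGLE_API_KEY", "your-gemini-key")
os.environ.setdefault("SERPAPI_API_KEY", "your-serpapi-key")
```

setdefault only fills a variable that is not already set, so exported shell variables take precedence.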


Step 1: Install Required Libraries

We install the LangChain components and the Gemini integration. Note that LangChain's SerpAPIWrapper is backed by the google-search-results package (installing the serpapi package alone is not enough).

!pip install langchain langchain-core langchain-community langchain-google-genai google-search-results

Step 2: Import Everything Needed

from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import Tool
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_community.utilities import SerpAPIWrapper
import os

Step 3: Load API Keys

from google.colab import userdata  # reads keys saved in Colab's Secrets panel

os.environ["GOOGLE_API_KEY"] = userdata.get("GOOGLE_API_KEY")
os.environ["SERPAPI_API_KEY"] = userdata.get("SERPAPI_API_KEY")

Step 4: Configure the LLM

We use temperature = 0 for factual, stable answers.

llm = ChatGoogleGenerativeAI(
    model="gemini-3-pro-preview",
    temperature=0
)

Step 5: Define the Search Tool

A plain dictionary is not a valid LangChain tool, so we wrap the search function in a Tool object:

search_tool = Tool(
    name="search",
    description="Use this tool to search the web for up-to-date information.",
    func=SerpAPIWrapper().run,
)

Step 6: Create the Agentic Prompt

This prompt instructs the model to:

  1. Think about the question
  2. Decide whether search is needed
  3. Call the search tool (max once)
  4. Evaluate whether results are current
  5. Return a final verified answer

The ReAct agent requires the prompt to expose four variables ({tools}, {tool_names}, {input}, and {agent_scratchpad}), which LangChain fills in at run time:

prompt = PromptTemplate.from_template("""
You are a real-time search assistant.

You have access to the following tools:
{tools}

Use the search tool at most once.
If search results seem outdated, say "search results outdated", then answer using your best knowledge.

Use this format:

Question: the input question
Thought: reason about what to do next
Action: the action to take, one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (Thought/Action/Observation can repeat)
Thought: I now know the final answer
Final Answer: the final, verified answer

Question: {input}
Thought: {agent_scratchpad}
""")
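To see what the model actually receives, here is a plain-Python sketch of the substitution that create_react_agent performs at run time (the template text is a shortened stand-in, not the real prompt):

```python
# Plain-Python sketch of how the ReAct prompt variables get filled in.
# create_react_agent does this substitution for you on every model call.
template = (
    "Answer using the tools below.\n"
    "Tools:\n{tools}\n"
    "Tool names: [{tool_names}]\n"
    "Question: {input}\n"
    "Thought: {agent_scratchpad}"
)

rendered = template.format(
    tools="search: Use it to find current information.",
    tool_names="search",
    input="Who is the current president of the US?",
    agent_scratchpad="",  # grows with each Thought/Action/Observation turn
)
print(rendered)
```

The agent_scratchpad starts empty and accumulates the previous Thought/Action/Observation turns, which is how the model sees the tool results on its next call.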

Step 7: Create the Agent

agent = create_react_agent(llm, [search_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[search_tool],
                         verbose=True, handle_parsing_errors=True)

This follows the classic LangChain ReAct agent API (LangChain 0.1–0.3). AgentExecutor runs the loop: it sends the prompt to Gemini, executes any tool call the model requests, and feeds the observation back until a final answer is produced.

Setting verbose=True shows the inner reasoning steps (thoughts, actions, tool calls).


Step 8: Ask a Real-Time Question

Query:

executor.invoke({"input": "Who is the current president of the US?"})

What Happens Internally:

  • The LLM checks its internal knowledge (outdated).
  • It decides to call the search tool.
  • Tool fetches Google Search results (via SERPAPI).
  • Agent analyzes the result.
  • Final answer is produced.
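The five steps above can be sketched as a minimal ReAct loop in plain Python, with a stubbed LLM and a stubbed search tool so it runs without any API keys (fake_llm, fake_search, and react_loop are illustrative stand-ins, not LangChain APIs):

```python
import re

def fake_llm(prompt: str) -> str:
    # Stand-in for Gemini: first call decides to search,
    # second call (after seeing an Observation) answers.
    if "Observation:" not in prompt:
        return ("Thought: my training data may be stale\n"
                "Action: search\n"
                "Action Input: current US president")
    return "Final Answer: Donald Trump is the current President of the United States."

def fake_search(query: str) -> str:
    # Stand-in for the SERPAPI tool.
    return "Donald Trump was inaugurated as the 47th US president in January 2025."

def react_loop(question: str, llm, tools: dict, max_steps: int = 3) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(prompt)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action: (\w+)", reply).group(1)
        arg = re.search(r"Action Input: (.+)", reply).group(1)
        observation = tools[action](arg)  # run the chosen tool
        prompt += f"{reply}\nObservation: {observation}\n"
    return "max steps reached"

result = react_loop("Who is the current president of the US?",
                    fake_llm, {"search": fake_search})
print(result)
```

AgentExecutor does the same parse-act-observe cycle, with a real model, real tool calls, and more robust output parsing.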

Output:

Donald Trump is the current President of the United States (2025).

The agent gives the correct real-time answer, not the LLM’s outdated knowledge.


What We Achieved

We built a complete search-enabled AI assistant that:

✓ Uses Gemini 3 for reasoning
✓ Uses Google Search for real-time updates
✓ Follows the ReAct pattern (Reason + Act)
✓ Provides highly accurate answers for current-affairs questions

This solves one of the biggest limitations of LLMs.


Why This Matters

With this pattern, you can now build:

  • News-aware chatbots
  • Stock price assistants
  • Real-time education bots
  • AI research assistants
  • Up-to-date FAQ/chat systems

Any LLM can become live with just one tool call.


Conclusion

We started with a pre-trained model that could not answer a simple current-affairs question.
By adding a search tool and building an agent, we transformed it into a real-time intelligent assistant.

This is the core idea behind AI Agents, combining model intelligence with external tools.

Connect with me on LinkedIn – https://www.linkedin.com/in/nikhileshtayal/

Customized AI + LLM Coaching for Senior IT Professionals

If you are looking to learn AI + Gen AI in an instructor-led live class environment, check out these dedicated courses for senior IT professionals here.

Pricing for AI courses for senior IT professionals – https://www.aimletc.com/ai-ml-etc-course-offerings-pricing/

Happy learning!

Nikhilesh Tayal