Custom Tools for Agents: Writing and Exposing Safe Python Tools

Tags: LangChain Tools, Custom Python Tools, Secure Agent Tools, Agent Integration

Nov 18, 2025


AI agents are powerful tools, but with great power comes great responsibility. When integrating custom tools into an agentic system, particularly with frameworks like LangChain, it's critical to ensure that these tools are secure and reliable. Whether your agent is web scraping, making API calls, or transforming data, safe execution is a non-negotiable requirement.

This article will guide you through creating secure custom Python tools for AI agents. We'll cover best practices for tool integration, sandboxing, and input validation, with a concrete example of a safe web-scraper tool. By the end of this article, you’ll be equipped to build and expose custom agent tools without compromising security or reliability.

1. What Are Custom Tools for AI Agents?

In the context of AI agents, custom tools extend an agent’s capabilities. These tools are usually functions or services that the agent can invoke to perform tasks beyond basic reasoning, such as:

  • Web scraping

  • Data transformation

  • Database queries

  • Sending HTTP requests

  • Interacting with external APIs

Custom tools are essential in a multi-agent system where one agent’s action depends on another's output or the need for external information. For example, a data validation agent might invoke a web-scraping tool to fetch real-time data, while an aggregator agent might call a custom tool to query a database for historical context.

While these tools empower agents to perform complex tasks, security is a critical concern. Exposing unverified or unsafe tools could allow malicious inputs to break your system, leak sensitive data, or even run arbitrary code.

2. Why Security Matters: Risks and Considerations

Before diving into writing custom tools, let’s outline the security risks you must address:

  • Arbitrary Code Execution: A poorly sandboxed tool could execute harmful code provided by the user or another agent.

  • Insecure Input Handling: Custom tools could be vulnerable to SQL injection, command injection, or malformed input if inputs aren’t validated properly.

  • Sensitive Data Leaks: Tools interacting with external APIs or databases may inadvertently expose sensitive user data.

  • Denial of Service (DoS): Tools that make external calls without timeouts or rate limits can hang indefinitely, block further execution, or flood downstream resources.

By adhering to security best practices in sandboxing, input validation, and output handling, you can mitigate these risks and create safe, effective custom tools.

3. Creating Safe Custom Python Tools for LangChain Agents

Now that we understand the importance of security, let’s explore how to write safe Python tools for AI agents. Below is a general approach, which includes setting up a tool object, validating input, and isolating execution environments.

a. Tool Objects in LangChain

In LangChain, a Tool is an abstraction that allows agents to interact with external systems or functions. Custom tools are defined as subclasses of BaseTool that implement the _run() method (and optionally _arun() for async use) and declare a name and description so the agent knows when to invoke them.

Here’s a simple tool object pattern:

from langchain.tools import BaseTool

class CustomTool(BaseTool):
    name = "custom_tool"
    description = "Processes a query string and returns the result."

    def _run(self, query: str) -> str:
        # This is where the custom logic happens
        return f"Processed: {query}"

When you create a custom tool, ensure that you:

  1. Define clear input/output contracts: The tool should specify exactly what data it expects and what it will return.

  2. Add safeguards: For instance, before performing a web scrape or API call, verify the query input and ensure it adheres to expected formats.
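As a concrete illustration of both points, here is a framework-agnostic sketch of an input guard that runs before the tool's core logic. The helper name `validate_query` and the specific checks (length cap, shell metacharacters) are our own choices, not a LangChain API:

```python
def validate_query(query: str, max_len: int = 200) -> str:
    """Reject inputs that are empty, too long, or contain shell metacharacters.

    Returns the trimmed query so downstream logic sees a normalized value.
    """
    if not isinstance(query, str) or not query.strip():
        raise ValueError("query must be a non-empty string")
    if len(query) > max_len:
        raise ValueError(f"query must be at most {max_len} characters")
    # Defensive check in case the value is ever interpolated into a command
    if any(ch in query for ch in ";|&`$"):
        raise ValueError("query contains disallowed shell metacharacters")
    return query.strip()
```

Calling this at the top of _run() makes the tool's input contract explicit and fails fast on anything outside it.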

b. Sandboxing Custom Tools

Sandboxing is an essential security measure when running potentially risky code. It involves isolating the execution environment of a custom tool to limit its scope and prevent unwanted side effects (like accessing sensitive data, running destructive commands, or making unauthorized network calls).

There are several ways to sandbox Python code:

  • Virtual environments: Isolate dependencies and minimize the risk of global package interference.

  • Restricted execution: Use libraries like restrictedpython to limit the functions that the tool can call.

  • API key restrictions: If the tool makes external API requests, ensure that API keys or authentication tokens are scoped with limited access.
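One lightweight approach along these lines, sketched below under the assumption that spawning a fresh interpreter per call is acceptable for your workload, is to run risky code in a separate Python process started in isolated mode (-I, which ignores environment variables and the user site directory) with a hard timeout. A real sandbox would add OS-level resource limits and filesystem/network restrictions on top of this:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: int = 5) -> str:
    """Run a code snippet in a separate, isolated interpreter process.

    -I puts the child interpreter in isolated mode (no environment
    variables, no user site-packages); the timeout bounds execution time.
    This limits the blast radius but is NOT a complete sandbox on its own.
    """
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout
```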

For instance, when implementing a web scraper, we can restrict which websites it can crawl by validating the domain name:

import requests
from urllib.parse import urlparse
from langchain.tools import BaseTool

class SafeWebScraper(BaseTool):
    name = "safe_web_scraper"
    description = "Fetches page content from an allowlisted set of domains."

    def _run(self, query: str) -> str:
        # Validate the URL domain against an allowlist
        parsed_url = urlparse(query)
        if parsed_url.netloc not in ["trusteddomain.com", "anothersecure.com"]:
            raise ValueError("Untrusted domain")

        # Execute the scraping request with a bounded timeout
        response = requests.get(query, timeout=5)
        return response.text

This ensures that the web scraper only scrapes trusted sources, preventing potential attacks from malicious websites.

c. Input Validation

One of the most common vectors for exploiting custom tools is improper input validation. Ensure that all inputs are validated before they are passed to your tool’s core logic.

For example, let’s say your tool receives an integer input for processing data. You should ensure the input is a valid integer, and if it’s not, raise an error.

Here’s an example of input validation:

from langchain.tools import BaseTool

class DataProcessor(BaseTool):
    name = "data_processor"
    description = "Doubles a validated integer input."

    def _run(self, query: str) -> str:
        # Validate that the input is a valid integer
        try:
            number = int(query)
        except ValueError:
            raise ValueError("Input must be an integer")

        # Perform the data processing
        return f"Processed number: {number * 2}"

You can also use regular expressions for pattern matching or custom validators (e.g., checking for URL format, numerical range, etc.).
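For instance, here are a couple of reusable validators along those lines; the URL pattern and the numerical bounds are illustrative choices, not canonical ones:

```python
import re

# Simplified https-only URL pattern; illustrative, not RFC-complete
URL_PATTERN = re.compile(r"^https://[\w.-]+\.[a-z]{2,}(/\S*)?$", re.IGNORECASE)

def validate_url(url: str) -> str:
    """Accept only well-formed https:// URLs."""
    if not URL_PATTERN.match(url):
        raise ValueError("Input must be a well-formed https:// URL")
    return url

def validate_int_in_range(value: str, low: int = 0, high: int = 100) -> int:
    """Parse an integer and enforce a numerical range."""
    number = int(value)  # raises ValueError for non-integer input
    if not low <= number <= high:
        raise ValueError(f"Value must be between {low} and {high}")
    return number
```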

d. Handling Outputs Safely

Once your custom tool produces an output, you must ensure that it’s safe to return to the agent. This includes:

  • Sanitizing outputs: Ensure no harmful scripts or malicious content are included in the output (e.g., avoid returning raw HTML or scripts).

  • Limit data exposure: Avoid returning sensitive data unless absolutely necessary. Use data sanitization techniques to remove any unwanted information from the response.
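Both points can be sketched with the standard library alone; the tag-stripping regex and the 2,000-character cap below are our own choices, not a prescribed API:

```python
import re
from html import escape

TAG_RE = re.compile(r"<[^>]+>")

def sanitize_output(raw: str, max_chars: int = 2000) -> str:
    """Remove HTML tag markup, escape what remains, and cap the length."""
    text = TAG_RE.sub(" ", raw)      # strip tag markup like <script> and </div>
    text = " ".join(text.split())    # collapse runs of whitespace
    return escape(text)[:max_chars]  # escape residual <, >, & and truncate
```

The length cap also keeps a single tool call from flooding the agent's context window.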

4. Example: Safe Web-Scraper Tool

Here’s a safe web scraper tool example, incorporating the practices we’ve discussed: sandboxing, input validation, and safe output handling.

import requests
from urllib.parse import urlparse
from langchain.tools import BaseTool

class SafeWebScraper(BaseTool):
    name = "safe_web_scraper"
    description = "Scrapes text content from a small allowlist of trusted domains."

    def _run(self, query: str) -> str:
        # Validate the input query is a URL
        parsed_url = urlparse(query)
        if not parsed_url.scheme or not parsed_url.netloc:
            raise ValueError("Invalid URL format")

        # Restrict domains to trusted sources
        trusted_domains = ["example.com", "trustedsource.com"]
        if parsed_url.netloc not in trusted_domains:
            raise ValueError("Untrusted domain: Only trusted domains are allowed.")

        # Scrape the data
        try:
            response = requests.get(query, timeout=5)
            response.raise_for_status()  # Will raise an exception for HTTP errors
        except requests.exceptions.RequestException as e:
            raise RuntimeError(f"Error during web scraping: {e}")

        # Return sanitized text content from the page
        return response.text.strip()  # Strip to prevent unwanted trailing spaces or characters

Key Points:

  • URL Validation: Ensures the URL is well-formed and checks it against trusted domains.

  • Error Handling: Catches network-related errors (e.g., timeout, 404 errors).

  • Output Sanitization: Strips any unwanted characters from the response text before returning it.
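A practical way to verify these guard rails without touching the network is to factor the URL checks into a standalone helper and assert on it directly. The helper name check_url below is our own; it mirrors the validation steps inside _run:

```python
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "trustedsource.com"}

def check_url(query: str) -> str:
    """The same validation steps as SafeWebScraper._run, factored out."""
    parsed = urlparse(query)
    if not parsed.scheme or not parsed.netloc:
        raise ValueError("Invalid URL format")
    if parsed.netloc not in TRUSTED_DOMAINS:
        raise ValueError("Untrusted domain")
    return query

# The happy path and both failure modes can be asserted without any HTTP call
assert check_url("https://example.com/news") == "https://example.com/news"
for bad in ("not-a-url", "https://evil.com/"):
    try:
        check_url(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad} should have been rejected")
```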

5. Exposing Custom Tools to Agents

Once your tool is safe and reliable, you can expose it to your agent system. LangChain provides seamless integration of custom tools into agent workflows. Here's an example of how to add your custom tool to an agent:

from langchain.agents import initialize_agent, Tool, AgentType
from langchain.llms import OpenAI

# Initialize the LLM (e.g., OpenAI)
llm = OpenAI(model_name="text-davinci-003")

# Initialize the custom tool
web_scraper = SafeWebScraper()

# Create a tool list
tools = [Tool(name="Web Scraper", func=web_scraper.run, description="Scrapes data from a trusted website")]

# Initialize the agent with the tools
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Run the agent with a query
result = agent.run("Fetch the latest news from example.com")
print(result)

In this setup:

  • The SafeWebScraper is initialized and exposed to the agent.

  • The agent uses the tool to fetch data from a trusted source.


6. Conclusion

Custom tools can significantly extend the capabilities of AI agents, enabling them to interact with external systems, process complex data, and automate tasks. However, with this flexibility comes the need for security.

By following the principles of sandboxing, input validation, and output sanitization, you can safely write and expose Python tools for LangChain agents. These techniques will help you maintain the integrity and security of your agent systems while empowering them with custom functionality.

Kozker Tech

Start Your Data Transformation Today

Book a free 60-minute strategy session. We'll assess your current state, discuss your objectives, and map a clear path forward—no sales pressure, just valuable insights

Copyright Kozker. All rights reserved.
