MCP Tool Reference: Low Risk

openai_chat

Send a prompt to OpenAI and receive a response

Part of the OpenAI MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.

WHEN AI AGENTS USE THIS TOOL

AI agents call openai_chat to send prompts to OpenAI and receive responses. While the risk category is not fully classified, applying a rate limit gives you visibility into how often the tool is called and prevents unexpected bursts of activity from autonomous agents.

WHY ENFORCE A POLICY ON OPENAI_CHAT

Applying a policy to openai_chat gives you an audit trail of every call an AI agent makes. Even for low-risk tools, visibility into agent behaviour helps you debug issues, optimise workflows, and maintain compliance with your organisation's security requirements.

RECOMMENDED POLICY

Apply a rate limit to control usage and monitor for unexpected behaviour.

openai.yaml
tools:
  openai_chat:
    rules:
      - action: allow
        rate_limit:
          max: 60
          window: 60

See the full OpenAI policy, which covers this server's single tool.

DETAILS

Tool Name

openai_chat

Category

Other

MCP Server

OpenAI MCP Server

Risk Level

Low

FREQUENTLY ASKED QUESTIONS

What does the openai_chat tool do?

Send a prompt to OpenAI and receive a response. It is categorised as an Other tool in the OpenAI MCP Server, which means it performs auxiliary operations.

How do I enforce a policy on openai_chat?

Add a rule in your Intercept YAML policy under the tools section for openai_chat. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the OpenAI MCP server.
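As a sketch, a policy that combines these actions might look like the following. The action and rate_limit keys follow the examples on this page; the argument-validation syntax shown is an assumption for illustration only, so check the Intercept documentation for the exact schema:

openai.yaml
tools:
  openai_chat:
    rules:
      # Hypothetical validation rule; the "validate" key and its
      # fields are assumptions, not confirmed Intercept schema.
      - action: deny
        validate:
          argument: prompt
          max_length: 4000
        reason: "Prompt exceeds the allowed length."
      # Otherwise allow, with the rate limit recommended above.
      - action: allow
        rate_limit:
          max: 60
          window: 60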

What risk level is openai_chat?

openai_chat is an Other tool with a low risk level. Low-risk tools are generally safe to allow by default, though a rate limit still gives you visibility into agent activity.

Can I rate-limit openai_chat?

Yes. Add a rate_limit block to the openai_chat rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
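For instance, tightening the recommended policy from 60 calls per minute to 10 uses the same keys shown above:

openai.yaml
tools:
  openai_chat:
    rules:
      - action: allow
        rate_limit:
          max: 10
          window: 60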

How do I block openai_chat completely?

Set action: deny in the Intercept policy for openai_chat. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
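A deny rule with an explanatory reason might look like this (the reason field is assumed to sit alongside action within the rule, as the description above suggests):

openai.yaml
tools:
  openai_chat:
    rules:
      - action: deny
        reason: "Direct OpenAI calls are not permitted from agents in this environment."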

What MCP server provides openai_chat?

openai_chat is provided by the OpenAI MCP server (openai-mcp-server). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.

ENFORCE POLICIES ON OPENAI

Open source. One binary. Zero dependencies.