groq_chat
Ultra-fast LLM inference with chat models
Part of the Groq MCP server. Enforce policies on this tool with Intercept, the open-source MCP proxy.
WHEN AI AGENTS USE THIS TOOL
AI agents call groq_chat to send chat-completion requests to Groq's inference API. While the risk category is not fully classified, applying a rate limit gives you visibility into how often the tool is called and prevents unexpected bursts of activity from autonomous agents.
WHY ENFORCE A POLICY ON GROQ_CHAT
Applying a policy to groq_chat gives you an audit trail of every call an AI agent makes. Even for low-risk tools, visibility into agent behaviour helps you debug issues, optimise workflows, and maintain compliance with your organisation's security requirements.
RECOMMENDED POLICY
Apply a rate limit to control usage and monitor for unexpected behaviour.
tools:
  groq_chat:
    rules:
      - action: allow
        rate_limit:
          max: 60
          window: 60

See the full Groq policy for all 7 tools.
MORE GROQ TOOLS
groq_vision, groq_transcribe, groq_text_to_speech, groq_batch_process, groq_compound, groq_translate_audio
FREQUENTLY ASKED QUESTIONS
What does the groq_chat tool do?
Ultra-fast LLM inference with chat models. It is categorised as an "Other" tool in the Groq MCP server, which means it performs auxiliary operations.
How do I enforce a policy on groq_chat?
Add a rule in your Intercept YAML policy under the tools section for groq_chat. You can allow, deny, rate-limit, or validate arguments. Then run Intercept as a proxy in front of the Groq MCP server.
What risk level is groq_chat?
groq_chat is an "Other" tool with low risk. Tools that do not modify external state are generally safe to allow by default, though a rate limit is still recommended for visibility.
Can I rate-limit groq_chat?
Yes. Add a rate_limit block to the groq_chat rule in your Intercept policy. For example, setting max: 10 and window: 60 limits the tool to 10 calls per minute. Rate limits are tracked per agent session and reset automatically.
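The 10-calls-per-minute example above can be written as the following rule. This is a minimal sketch that reuses the field names from the recommended policy (action, rate_limit, max, window); only the limit values differ:

```yaml
tools:
  groq_chat:
    rules:
      - action: allow
        rate_limit:
          max: 10     # at most 10 calls...
          window: 60  # ...per 60-second window
```

Calls beyond the limit within a window are rejected; the counter is tracked per agent session and resets automatically.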
How do I block groq_chat completely?
Set action: deny in the Intercept policy for groq_chat. The AI agent will receive a policy violation error and cannot call the tool. You can also include a reason field to explain why the tool is blocked.
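A blocking rule with an explanatory message might look like the sketch below. The rule shape follows the recommended policy above, and the reason field is the one mentioned in the answer; the message text itself is illustrative:

```yaml
tools:
  groq_chat:
    rules:
      - action: deny
        reason: "Chat inference is not approved for this agent."  # example message, returned with the policy violation error
```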
What MCP server provides groq_chat?
groq_chat is provided by the Groq MCP server (groq-mcp). Intercept sits as a proxy in front of this server to enforce policies before tool calls reach the server.
ENFORCE POLICIES ON GROQ
Open source. One binary. Zero dependencies.