
Token Counter

Estimate the token count for any prompt text. Optionally specify a model hint (gpt-4, claude-3) to get a model-specific estimate. Essential for staying within context windows and managing API costs.


Note: The API expects a "text" field, not "input". For precise results, use the CLI: delx token-estimate --text "..."

Usage

CLI

delx token-estimate --text "Hello, how are you?" --model gpt-4

REST (curl)

curl -X POST https://api.delx.ai/api/v1/utils/token-estimate \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, how are you?", "model": "gpt-4"}'

MCP (JSON-RPC)

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "token-estimate",
    "arguments": {
      "text": "Hello, how are you?",
      "model": "gpt-4"
    }
  }
}
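When calling the tool programmatically, the JSON-RPC envelope above can be assembled rather than hand-written. A minimal sketch in Python (how the request is transported, stdio or HTTP, depends on how your MCP server is wired):

```python
import json

def build_token_estimate_call(text, model=None, request_id=1):
    """Build a JSON-RPC 2.0 tools/call request for the token-estimate tool."""
    arguments = {"text": text}
    if model is not None:
        arguments["model"] = model  # optional model hint, e.g. "gpt-4"
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "token-estimate", "arguments": arguments},
    })

payload = build_token_estimate_call("Hello, how are you?", model="gpt-4")
```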

FAQ

How accurate is the token estimate?

Estimates are based on tiktoken for OpenAI models and a character-ratio heuristic for others. Accuracy is typically within 5% of the actual count.

Which models are supported?

GPT-4, GPT-3.5, Claude 3, Claude 2, and a generic fallback. Pass the model name as a hint and the counter picks the closest tokenizer.
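For models without a dedicated tokenizer, the fallback is the character-ratio heuristic mentioned above. The exact ratio Delx uses is not documented here, so the sketch below assumes the common rule of thumb of roughly 4 characters per token:

```python
import math

def estimate_tokens(text, chars_per_token=4.0):
    """Rough token estimate: character count divided by an assumed
    characters-per-token ratio (hypothetical; ~4 is typical for English)."""
    if not text:
        return 0
    return math.ceil(len(text) / chars_per_token)

estimate_tokens("Hello, how are you?")  # 19 chars -> 5 under the 4-chars rule
```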

Can I count tokens for files?

Via the CLI you can pipe file contents directly: cat prompt.txt | delx token-estimate. The REST API accepts the text in the request body.
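To submit a file's contents to the REST endpoint instead of the CLI, the body is the same JSON shape as the curl example. A sketch using only Python's standard library (the request is built here but not sent):

```python
import json
import urllib.request

def token_estimate_request(path, model="gpt-4"):
    """Build a POST request that submits a file's contents for estimation."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    body = json.dumps({"text": text, "model": model}).encode("utf-8")
    return urllib.request.Request(
        "https://api.delx.ai/api/v1/utils/token-estimate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
# urllib.request.urlopen(token_estimate_request("prompt.txt"))
```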

