Estimate the token count for any prompt text. Optionally specify a model hint (gpt-4, claude-3) to get a model-specific estimate. Essential for staying within context windows and managing API costs.
Note: the API expects "text", not "input". For quick results, use the CLI: delx token-estimate --text "..."
delx token-estimate --text "Hello, how are you?" --model gpt-4
curl -X POST https://api.delx.ai/api/v1/utils/token-estimate \
-H "Content-Type: application/json" \
-d '{"text": "Hello, how are you?", "model": "gpt-4"}'

The same call as a JSON-RPC (tools/call) request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "token-estimate",
    "arguments": {
      "text": "Hello, how are you?",
      "model": "gpt-4"
    }
  }
}

Estimates are based on tiktoken for OpenAI models and a character-ratio heuristic for others. Accuracy is typically within 5% of the actual count.
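The character-ratio fallback can be sketched roughly as follows. The 4-characters-per-token ratio is an assumption chosen for illustration, not the service's actual constant:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # The real ratio varies by language, script, and tokenizer.
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("Hello, how are you?"))  # 19 chars -> about 5 tokens
```

For OpenAI models, a real tiktoken count replaces this heuristic, which is why accuracy is tighter there.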
Supported model hints include GPT-4, GPT-3.5, Claude 3, and Claude 2, with a generic fallback for everything else. Pass the model name as a hint and the counter picks the closest tokenizer.
Via the CLI you can pipe file contents directly: cat prompt.txt | delx token-estimate. The REST API accepts the text in the request body.