| Requirement | What to do | Enables |
|---|---|---|
| 1. Set run_type="llm" | Pass run_type="llm" to @traceable | LLM-specific rendering, token/cost display |
| 2. Format inputs/outputs | Use OpenAI, Anthropic, or LangChain message format | Structured message rendering, Playground support |
| 3. Set ls_provider and ls_model_name | Pass both in metadata | Cost tracking, Playground model selection |
| 4. Provide token counts | Set usage_metadata on the run | Token counts and cost calculation |
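The four requirements above can be combined in one traced function. The sketch below is illustrative: the provider name, model name, and token counts are hypothetical placeholders, and the real `traceable` decorator comes from the `langsmith` package (it is stubbed here so the example runs standalone).

```python
# Stand-in for langsmith.traceable so this sketch is self-contained.
def traceable(run_type=None, metadata=None):
    def wrap(fn):
        return fn
    return wrap

@traceable(
    run_type="llm",                                   # 1. LLM run type
    metadata={"ls_provider": "my_provider",           # 3. provider and
              "ls_model_name": "my-model-v1"},        #    model name
)
def chat(messages: list) -> dict:
    reply = "Hello!"  # stand-in for a real model call
    return {
        # 2. OpenAI-style output format
        "choices": [{"message": {"role": "assistant", "content": reply}}],
        # 4. token counts for cost calculation
        "usage_metadata": {"input_tokens": 8, "output_tokens": 2},
    }
```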
If you are using LangChain OSS, the OpenAI wrapper, or the Anthropic wrapper, these details are handled automatically. The examples on this page use the traceable decorator/wrapper (the recommended approach for Python and JS/TS). The same requirements apply if you use the RunTree or API directly.
Messages format
When tracing a custom model or a custom input/output format, the inputs and outputs must follow the LangChain format, the OpenAI Chat Completions format, or the Anthropic Messages format. For more details, refer to the OpenAI Chat Completions or Anthropic Messages documentation.
Convert custom I/O formats into LangSmith compatible formats
If you’re using a custom input or output format, you can convert it to a LangSmith compatible format using the process_inputs/processInputs and process_outputs/processOutputs options on the @traceable decorator (Python) or traceable function (TS).
process_inputs/processInputs and process_outputs/processOutputs accept functions that allow you to transform the inputs and outputs of a specific trace before they are logged to LangSmith. They have access to the trace’s inputs and outputs, and can return a new dictionary with the processed data.
Here’s a boilerplate example of how to use process_inputs and process_outputs to convert a custom I/O format into a LangSmith compatible format:
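A minimal sketch of such converters is shown below. The `{"query": ...}` and `{"answer": ...}` shapes are hypothetical custom formats; the two functions are plain transforms that, in real code, would be passed to the langsmith decorator as `@traceable(run_type="llm", process_inputs=to_llm_inputs, process_outputs=to_llm_outputs)`.

```python
def to_llm_inputs(inputs: dict) -> dict:
    # Convert a hypothetical {"query": ...} payload into
    # OpenAI Chat Completions-style messages.
    return {"messages": [{"role": "user", "content": inputs["query"]}]}

def to_llm_outputs(outputs: dict) -> dict:
    # Convert a hypothetical {"answer": ...} payload into an
    # OpenAI-style choices list.
    return {
        "choices": [
            {"message": {"role": "assistant", "content": outputs["answer"]}}
        ]
    }
```

Both functions receive the trace's raw inputs/outputs as a dictionary and return the processed dictionary that LangSmith will log.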
Identify a custom model in traces
When using a custom model, it is recommended to also provide the following metadata fields to identify the model when viewing traces and when filtering.
- ls_provider: The provider of the model, e.g., "openai", "anthropic".
- ls_model_name: The name of the model, e.g., "gpt-5.4-mini", "claude-3-opus-20240229".
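For example, the two fields can be bundled into a metadata dictionary and passed to @traceable. The provider and model names below are placeholders:

```python
# Hypothetical metadata identifying a custom model. In real code, pass it
# to the langsmith decorator: @traceable(run_type="llm", metadata=MODEL_METADATA)
MODEL_METADATA = {
    "ls_provider": "my_provider",
    "ls_model_name": "my-model-v1",
}
```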
If your model streams its output (for example, a chat_model yielding chunks), you can “reduce” the outputs into the same format as the non-streaming version. This is only supported in Python:
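A sketch of such a reducer is below. It assumes the stream yields plain string chunks (a hypothetical shape); in real code the function would be passed to the langsmith decorator, e.g. `@traceable(run_type="llm", reduce_fn=reduce_chunks)`.

```python
def reduce_chunks(chunks: list) -> dict:
    # Collapse streamed string chunks into a single OpenAI-style output,
    # matching the shape a non-streaming call would return.
    return {
        "choices": [
            {"message": {"role": "assistant", "content": "".join(chunks)}}
        ]
    }
```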
Setting ls_model_name in your metadata is required for LangSmith to identify the model and calculate costs for custom LLM traces. Without it, token counts may still be recorded but costs won’t be estimated. For more details on metadata fields, refer to the Add metadata and tags guide.
Provide token and cost information
Token counts enable cost calculation, which LangSmith displays in the Tracing Projects UI. There are two ways to provide them:
- Set usage_metadata on the run tree: call get_current_run_tree()/getCurrentRunTree() inside your @traceable function and set the usage_metadata field. This does not change your function’s return value.
- Return usage_metadata in the output: include usage_metadata as a top-level key in the dictionary your function returns.
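The second approach can be sketched as a plain function (the word-count token estimate is a stand-in for real counts from your model; in real code the function would be decorated with @traceable(run_type="llm")):

```python
def my_llm(prompt: str) -> dict:
    text = f"echo: {prompt}"  # stand-in for a real model call
    return {
        "choices": [{"message": {"role": "assistant", "content": text}}],
        # usage_metadata as a top-level key in the returned dictionary
        "usage_metadata": {
            "input_tokens": len(prompt.split()),
            "output_tokens": len(text.split()),
        },
    }
```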
Supported usage_metadata fields
| Field | Type | Description |
|---|---|---|
| input_tokens | int | Total input/prompt tokens |
| output_tokens | int | Total output/completion tokens |
| total_tokens | int | Sum of input + output (optional; can be inferred) |
| input_token_details | object | Breakdown: cache_read, cache_creation, audio, text, image |
| output_token_details | object | Breakdown: reasoning, audio, text, image |
Costs can also be provided directly via the input_cost, output_cost, and total_cost fields. For details on configuring model pricing and viewing costs in the UI, refer to the Cost tracking page.
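For illustration, a usage_metadata payload that supplies costs directly might look like the following (the token counts and dollar amounts are made up, and it is assumed here that cost fields sit alongside token fields as listed above):

```python
usage_metadata = {
    "input_tokens": 120,
    "output_tokens": 45,
    # Costs in dollars; hypothetical values for illustration.
    "input_cost": 0.00012,
    "output_cost": 0.00009,
    "total_cost": 0.00021,
}
```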
Time-to-first-token
If you are using traceable or one of the SDK wrappers, LangSmith will automatically populate time-to-first-token for streaming LLM runs. However, if you are using the RunTree API directly, you will need to add a new_token event to the run tree in order to properly populate time-to-first-token.
Here’s an example:
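A sketch of the event payload is below. The "new_token" event name comes from the text above; the exact structure of the event dict and the RunTree call that records it are assumptions, not confirmed API details.

```python
from datetime import datetime, timezone

def first_token_event(token: str) -> dict:
    # Build a new_token event with a UTC timestamp; the "kwargs" wrapper
    # for the token payload is a hypothetical shape.
    return {
        "name": "new_token",
        "time": datetime.now(timezone.utc).isoformat(),
        "kwargs": {"token": token},
    }

# In real code, you would attach this to the run tree when the first
# streamed chunk arrives, e.g. run_tree.add_event(first_token_event(tok))
# (method name assumed; consult the RunTree reference).
```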
Related
- Custom instrumentation: core @traceable and RunTree patterns.
- Access the current run (span) within a traced function: using get_current_run_tree() to set usage_metadata and other fields at runtime.
- Trace OpenAI applications: automatic token and cost tracking when using the OpenAI wrapper.
- Trace Anthropic applications: automatic token and cost tracking when using the Anthropic wrapper.
- Integrations overview: full list of providers and frameworks with built-in LangSmith support.

