There are three ways to log traces to LangSmith:

- `@traceable` decorator: recommended for most cases
- `trace` context manager: Python only
- `RunTree` API: explicit, low-level control

This page also covers:

- Specifying a custom run ID, which is useful for attaching feedback immediately after a run or correlating with external systems.
- Ensuring all traces are submitted before your process exits.
If you’re using an LLM provider or agent framework with a built-in LangSmith integration, refer to the integrations overview instead.
Prerequisites
Before tracing, set the following environment variables:
- `LANGSMITH_TRACING=true`: enables tracing. Set this to toggle tracing on and off without changing your code. `LANGSMITH_TRACING` controls the `@traceable` decorator and the `trace` context manager. To override this at runtime for `@traceable` without changing environment variables, use `tracing_context(enabled=True/False)` (Python) or pass `tracingEnabled` directly to `traceable` (JS/TS). `RunTree` objects are not affected by any of these controls; they always send data to LangSmith when posted.
- `LANGSMITH_API_KEY`: your LangSmith API key.
By default, LangSmith logs traces to a project named `default`. To log to a different project, set `LANGSMITH_PROJECT`. For more details, refer to Log traces to a specific project.
Use @traceable / traceable
Apply @traceable (Python) or traceable (TypeScript) to any function to make it a traced run. LangSmith handles context propagation across nested calls automatically.
The following example traces a simple pipeline: run_pipeline calls format_prompt to build the messages, invoke_llm to call the model, and parse_output to extract the result.
Each function is individually traced, and because they’re called from within run_pipeline (also traced), LangSmith automatically nests them as child runs. invoke_llm uses run_type="llm" to mark it as an LLM call so LangSmith can render token counts and latency correctly:
run_pipeline trace with format_prompt, invoke_llm, and parse_output as nested child runs.
When you wrap a sync function with traceable (e.g., formatPrompt in the previous example), use the await keyword when calling it to ensure the trace is logged correctly.

Use the trace context manager (Python only)
In Python, you can use the trace context manager to log traces to LangSmith. This is useful in situations where:
- You want to log traces for a specific block of code.
- You want control over the inputs, outputs, and other attributes of the trace.
- It is not feasible to use a decorator or wrapper.
- Any or all of the above.
The trace context manager is compatible with the `@traceable` decorator and the `wrap_openai` wrapper, so you can use them together in the same application.
The following example shows all three used together. wrap_openai wraps the OpenAI client so its calls are traced automatically. my_tool uses @traceable with run_type="tool" and a custom name to appear correctly in the trace. chat_pipeline itself is not decorated; instead, ls.trace wraps the call, letting you pass the project name and inputs explicitly and set outputs manually via rt.end():
Use the RunTree API
Another, more explicit way to log traces to LangSmith is via the RunTree API. This API allows you more control over your tracing. You can manually create runs and child runs to assemble your trace. You still need to set your LANGSMITH_API_KEY, but LANGSMITH_TRACING is not necessary for this method.
This method is not recommended for most use cases; manually managing trace context is error-prone compared to @traceable, which handles context propagation automatically.
Example usage
You can extend the utilities explained in the previous section to trace any code, for example, any public method in a class.

Specify a custom run ID
By default, LangSmith assigns a random ID to each run. You can override this when you need to know the run ID ahead of time (for example, to attach feedback immediately after a run), correlate LangSmith runs with IDs from an external system, or make runs idempotent using a deterministic ID.

Use UUID v7 for custom run IDs. UUIDv7 embeds a timestamp, which preserves correct time-ordering of runs in a trace. The LangSmith SDK exports a `uuid7` helper (Python v0.4.43+, JS v0.3.80+):

- Python: `from langsmith import uuid7`
- JS/TS: `import { uuid7 } from 'langsmith'`
- `@traceable`: pass `run_id` inside `langsmith_extra` when calling a `@traceable` function (Python), or pass `id` in the config object passed to `traceable` (TypeScript).
- `trace` context manager (Python only): pass `run_id` directly to the trace context manager constructor.
Ensure all traces are submitted before exiting
LangSmith performs tracing in a background thread to avoid blocking your production application. This means that your process may end before all traces are successfully posted to LangSmith. The following options ensure traces are submitted:

- If you are using LangChain, refer to the LangChain tracing guide.
- If you are using the LangSmith SDK standalone, call the `flush` method before exit:
Related
- Observability concepts: background on runs, traces, and the LangSmith data model
- Run (span) data format: schema reference for run fields including `dotted_order`, `trace_id`, and `parent_run_id`
- Log user feedback using the SDK: common use case for pre-specifying a run ID
- Access the current run (span) within a traced function: read or modify the active run from inside a trace
- Log traces to a specific project: route traces to a named project instead of `default`
- Trace with API: low-level REST API alternative to the SDK
- Tracing Basics video from the Introduction to LangSmith Course

