LangSmith uses the concept of a project to group traces. If left unspecified, the project is set to default. You can set the LANGSMITH_PROJECT environment variable to configure a custom project name for an entire application run. Set it before running your application:
```shell
export LANGSMITH_PROJECT=my-custom-project
```
The LANGSMITH_PROJECT environment variable is only supported in JS SDK versions >= 0.2.16; if you are using an older version, use LANGCHAIN_PROJECT instead.
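For older JS SDK versions, the fallback variable is set the same way (shown here with the same example project name):

```shell
export LANGCHAIN_PROJECT=my-custom-project
```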
If the project specified does not exist, LangSmith will automatically create it when the first trace is ingested.
You can also set the project name at program runtime in various ways, depending on how you are annotating your code for tracing. This is useful when you want to log traces to different projects within the same application:
Pass the project name at decoration or configuration time.
Override it per individual call.
Set it when constructing a run directly.
Setting the project name dynamically using one of the following methods overrides the project name set by the LANGSMITH_PROJECT environment variable.
```python
import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith.
# Ensure that the LANGSMITH_TRACING environment variable is set for @traceable to work.
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
    project_name="My Project",
)
def call_openai(messages: list[dict], model: str = "gpt-5.4-mini") -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

# Call the decorated function
call_openai(messages)

# You can also specify the project via the project_name parameter.
# This overrides the project_name specified in the @traceable decorator.
call_openai(
    messages,
    langsmith_extra={"project_name": "My Overridden Project"},
)

# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable decorated functions, and logs traces to LangSmith automatically.
# Ensure that the LANGSMITH_TRACING environment variable is set for the wrapper to work.
from langsmith import wrappers

wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=messages,
    langsmith_extra={"project_name": "My Project"},
)

# Alternatively, create a RunTree object.
# You can set the project name using the project_name parameter.
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
    project_name="My Project",
)
chat_completion = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=messages,
)

# End and submit the run
rt.end(outputs=chat_completion)
rt.post()
```
If you need to route traces dynamically to different LangSmith workspaces based on runtime configuration (e.g., routing different users or tenants to separate workspaces), the approach differs by language:
Python: use workspace-specific LangSmith clients with tracing_context.
TypeScript: pass a custom client to traceable, or use LangChainTracer with callbacks.
This approach is useful for multi-tenant applications where you want to isolate traces by customer, environment, or team at the workspace level.
Use this approach for general applications where you want to dynamically route traces to different workspaces based on runtime logic (e.g., customer ID, tenant, or environment).

Key components:
Initialize separate Client instances for each workspace with their respective workspace_id.
Use tracing_context (Python) or pass the workspace-specific client to traceable (TypeScript) to route traces.
Pass workspace configuration through your application’s runtime config.
```python
import os

from langsmith import Client, traceable, tracing_context

# API key with access to multiple workspaces
api_key = os.getenv("LS_CROSS_WORKSPACE_KEY")

# Initialize clients for different workspaces
workspace_a_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_A_ID>",  # e.g., "abc123..."
)
workspace_b_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_B_ID>",  # e.g., "def456..."
)

# Example: Route based on customer ID
def get_workspace_client(customer_id: str):
    """Route to the appropriate workspace based on customer."""
    if customer_id.startswith("premium_"):
        return workspace_a_client, "premium-customer-traces"
    else:
        return workspace_b_client, "standard-customer-traces"

@traceable
def process_request(data: dict, customer_id: str):
    """Process a customer request with workspace-specific tracing."""
    # Your business logic here
    return {"status": "success", "data": data}

# Use tracing_context to route to the appropriate workspace
def handle_customer_request(customer_id: str, request_data: dict):
    client, project_name = get_workspace_client(customer_id)
    # Everything within this context is traced to the selected workspace
    with tracing_context(enabled=True, client=client, project_name=project_name):
        return process_request(request_data, customer_id)

# Example usage
handle_customer_request("premium_user_123", {"query": "Hello"})
handle_customer_request("standard_user_456", {"query": "Hi"})
```
Override default workspace for LangSmith deployments
When deploying agents to LangSmith, you can override the default workspace that traces are sent to by using a graph lifespan context manager. This is useful when you want to route traces from a deployed agent to different workspaces based on runtime configuration passed through the config parameter.
```python
import contextlib
import os

from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.state import RunnableConfig
from langsmith import Client, tracing_context

# API key with access to multiple workspaces
api_key = os.getenv("LS_CROSS_WORKSPACE_KEY")

# Initialize clients for different workspaces
workspace_a_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_A_ID>",
)
workspace_b_client = Client(
    api_key=api_key,
    api_url="https://api.smith.langchain.com",
    workspace_id="<YOUR_WORKSPACE_B_ID>",
)

# Define configuration schema for workspace routing
class Configuration(TypedDict):
    workspace_id: str

# Define the graph state
class State(TypedDict):
    response: str

def greeting(state: State, config: RunnableConfig) -> State:
    """Generate a workspace-specific greeting."""
    workspace_id = config.get("configurable", {}).get("workspace_id", "workspace_a")
    if workspace_id == "workspace_a":
        response = "Hello from Workspace A!"
    elif workspace_id == "workspace_b":
        response = "Hello from Workspace B!"
    else:
        response = "Hello from the default workspace!"
    return {"response": response}

# Build the base graph
base_graph = (
    StateGraph(state_schema=State, config_schema=Configuration)
    .add_node("greeting", greeting)
    .set_entry_point("greeting")
    .set_finish_point("greeting")
    .compile()
)

@contextlib.asynccontextmanager
async def graph(config):
    """Dynamically route traces to different workspaces based on configuration."""
    # Extract workspace_id from the configuration
    workspace_id = config.get("configurable", {}).get("workspace_id", "workspace_a")

    # Route to the appropriate workspace
    if workspace_id == "workspace_a":
        client = workspace_a_client
        project_name = "production-traces"
    elif workspace_id == "workspace_b":
        client = workspace_b_client
        project_name = "development-traces"
    else:
        client = workspace_a_client
        project_name = "default-traces"

    # Apply the tracing context for the selected workspace
    with tracing_context(enabled=True, client=client, project_name=project_name):
        yield base_graph

# Usage: Invoke with different workspace configurations
# await graph({"configurable": {"workspace_id": "workspace_a"}})
# await graph({"configurable": {"workspace_id": "workspace_b"}})
```
Generic cross-workspace tracing: Use tracing_context (Python) or pass a workspace-specific client to traceable (TypeScript) to dynamically route traces to different workspaces.
LangGraph cross-workspace tracing: For LangGraph applications, use LangChainTracer with the workspace-specific client and attach it via the callbacks parameter.
LangSmith deployment override: Use a graph lifespan context manager (Python) to override the default deployment workspace based on runtime configuration.
Each Client instance maintains its own connection to a specific workspace via the workspace_id parameter (workspaceId in TypeScript).
You can customize both the workspace and project name for each route.
This pattern works with any LangSmith-compatible tracing (LangChain, OpenAI, custom functions, etc.).
When deploying with cross-workspace tracing, ensure your service key or PAT has the necessary permissions for all target workspaces. We recommend using a multi-workspace service key for production deployments. For LangSmith deployments, you must add a service key with cross-workspace access to your environment variables (e.g., LS_CROSS_WORKSPACE_KEY) to override the default service key generated by your deployment.
Write traces to multiple destinations with replicas
Replicas send every trace to multiple projects or workspaces at the same time. Unlike the dynamic routing patterns above, where each trace goes to a single destination, replicas duplicate each trace to all configured destinations in parallel. Use replicas to:
Mirror production traces into a staging or personal project for debugging.
Write to multiple workspaces for multi-tenant isolation without changing any application code.
Send traces to the same server under different projects, with per-replica metadata overrides.
Array format: a list of replica objects, useful when you need multiple replicas pointing at the same URL or when you want to set a project_name per replica:
You cannot use LANGSMITH_RUNS_ENDPOINTS alongside LANGSMITH_ENDPOINT. If you set both, LangSmith raises an error. Use only one to configure your endpoint.
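An illustrative check mirroring that rule (a hypothetical helper, not the SDK's actual validation code):

```python
def check_endpoint_config(env: dict) -> None:
    """Raise if both endpoint variables are configured at once."""
    if "LANGSMITH_RUNS_ENDPOINTS" in env and "LANGSMITH_ENDPOINT" in env:
        raise ValueError(
            "Set either LANGSMITH_RUNS_ENDPOINTS or LANGSMITH_ENDPOINT, not both."
        )

# A single endpoint variable is fine; setting both raises ValueError.
check_endpoint_config({"LANGSMITH_ENDPOINT": "https://api.smith.langchain.com"})
```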
You can also pass replicas directly in code, which is useful when destinations vary per request or tenant.
```python
from langsmith import traceable, tracing_context
from langsmith.run_trees import WriteReplica, ApiKeyAuth

@traceable
def my_pipeline(query: str) -> str:
    # Your application logic here
    return f"Answer to: {query}"

replicas = [
    WriteReplica(
        api_url="https://api.smith.langchain.com",
        auth=ApiKeyAuth(api_key="ls__key_workspace_a"),
        project_name="project-prod",
    ),
    WriteReplica(
        api_url="https://api.smith.langchain.com",
        auth=ApiKeyAuth(api_key="ls__key_workspace_b"),
        project_name="project-staging",
        # Optionally override fields on the replicated run
        updates={"metadata": {"environment": "staging"}},
    ),
]

with tracing_context(replicas=replicas):
    my_pipeline("What is LangSmith?")
```
You can also use the updates field to merge additional fields (such as metadata or tags) into a run for a specific replica only—the primary trace is unchanged. Replica errors are non-fatal: if a replica endpoint is unavailable, LangSmith logs the error without affecting the primary trace.
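Conceptually, a per-replica updates override is a shallow merge into the replicated copy of the run, leaving the primary run untouched. A rough sketch of that behavior (illustrative only, not the SDK's implementation; `apply_replica_updates` is a hypothetical helper):

```python
import copy

def apply_replica_updates(run: dict, updates: dict) -> dict:
    """Return a replicated copy of `run` with `updates` merged in."""
    replicated = copy.deepcopy(run)
    for key, value in updates.items():
        if isinstance(value, dict) and isinstance(replicated.get(key), dict):
            # Merge nested fields such as metadata rather than replacing them
            replicated[key] = {**replicated[key], **value}
        else:
            replicated[key] = value
    return replicated

run = {"name": "my_pipeline", "metadata": {"env": "prod", "team": "search"}}
replica_run = apply_replica_updates(run, {"metadata": {"env": "staging"}})
print(replica_run["metadata"])  # {'env': 'staging', 'team': 'search'}
print(run["metadata"])          # {'env': 'prod', 'team': 'search'}  (primary unchanged)
```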
Auth does not propagate in distributed traces. When a trace spans multiple services, LangSmith forwards replica project_name and updates to downstream services automatically, but not API keys or credentials. Each service must configure its own credentials for replica destinations.
Replicate within the same server (project-only replicas)
If all your replicas use the same LangSmith server, you can omit api_url and auth and specify only a project_name. The SDK reuses the default client credentials:
```python
from langsmith import traceable, tracing_context
from langsmith.run_trees import WriteReplica

@traceable
def my_pipeline(query: str) -> str:
    return f"Answer to: {query}"

with tracing_context(
    replicas=[
        WriteReplica(project_name="project-prod"),
        WriteReplica(project_name="project-staging", updates={"metadata": {"env": "staging"}}),
    ]
):
    my_pipeline("What is LangSmith?")
```
Route between LangSmith and OpenTelemetry destinations
You can decide at runtime whether a given invocation sends traces to LangSmith, to an OpenTelemetry (OTel) backend, or to both, without redeploying or modifying application logic. This is useful when you want to toggle between observability backends per environment, or to forward traces from LangSmith to an existing OTel collector at the same time.

Set the tracing mode using the tracing_mode constructor argument or the LANGSMITH_TRACING_MODE environment variable. Both accept the same values; an explicit tracing_mode argument always takes precedence over the environment variable:
"langsmith" (default): sends traces natively to LangSmith.
"otel": exports traces as OpenTelemetry spans to a configured OTel backend.
"hybrid" (Python only): sends to both LangSmith and an OTel backend from a single replica.
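One way to picture the three values is as a mapping from tracing_mode to the set of export destinations. A tiny illustrative helper (not SDK code; `export_destinations` is hypothetical):

```python
def export_destinations(tracing_mode: str = "langsmith") -> set[str]:
    """Map a tracing_mode value to its export destinations."""
    destinations = {
        "langsmith": {"langsmith"},            # default: native LangSmith ingestion
        "otel": {"otel"},                      # OTel spans to a configured backend
        "hybrid": {"langsmith", "otel"},       # both (Python only)
    }
    if tracing_mode not in destinations:
        raise ValueError(f"Unknown tracing_mode: {tracing_mode!r}")
    return destinations[tracing_mode]

print(export_destinations())                   # {'langsmith'}
print(sorted(export_destinations("hybrid")))   # ['langsmith', 'otel']
```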
If you are using the deprecated otel_enabled parameter on Client (Python only), migrate to tracing_mode: Client(otel_enabled=True) → Client(tracing_mode="hybrid"). The otel_enabled parameter will be removed in the next minor version.
Pass a configured Client directly into a replica to apply the desired mode at runtime:
```python
import openai
from langsmith import Client, traceable, tracing_context
from langsmith.run_trees import WriteReplica
from langsmith.wrappers import wrap_openai

# Create clients for different export destinations
ls_client = Client()                           # LangSmith only (default)
otel_client = Client(tracing_mode="otel")      # OTel backend only
hybrid_client = Client(tracing_mode="hybrid")  # Both LangSmith + OTel

openai_client = wrap_openai(openai.Client())

@traceable()
def joke():
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a short joke."}],
    )
    return response.choices[0].message.content

# Send this invocation to LangSmith only
with tracing_context(replicas=[WriteReplica(client=ls_client)]):
    joke()

# Send this invocation to an OTel backend only
with tracing_context(replicas=[WriteReplica(client=otel_client)]):
    joke()

# Send this invocation to both LangSmith and OTel simultaneously
with tracing_context(replicas=[WriteReplica(client=hybrid_client)]):
    joke()
```
The tracing_mode on each Client determines that replica’s export path. In Python, "hybrid" mode handles both destinations within a single replica. In TypeScript, the “send to both” case uses two separate replicas, one for each client, because there is no "hybrid" mode. Since each replica resolves its own client independently, you can also mix modes within a single tracing_context, for example keeping one replica sending to LangSmith while forwarding the same trace to an OTel collector via a second replica.