Using uv workspaces

I have a pet project with a build folder containing a CLI that builds the database, and an app folder with everything needed to run a Datasette-based read-only website. Both folders have a pyproject.toml, and the app is deployed with a custom Dockerfile that uses uv.

To migrate them to uv workspaces I added a top-level pyproject.toml. The uv docs state that every workspace needs a root which is itself a member, so the root gets both a [project] table and the [tool.uv.workspace] one:

[project]
name = "k-workspace"
version = "0.1.0"
requires-python = ">=3.12"

[tool.uv.workspace]
members = ["app", "build"]

The subfolders keep their own pyproject.toml files -- unchanged. For reference, app/pyproject.toml looks roughly like this (build/pyproject.toml is analogous, with name = "k-build"):

[project]
name = "k-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "datasette>=1.0a",
    "datasette-cluster-map",
    # ...
]

The name field here is what you pass to uv run --package later. Now we can remove the per-package uv.lock files and create a new lock file at the top level:

rm app/uv.lock build/uv.lock
uv lock

This produces one uv.lock at the root that resolves dependencies across both packages. Running uv lock from a subdirectory still works — uv detects the workspace and uses the root lockfile.

To run a tool from a specific package, use --package:

uv run --package k-app datasette ...
uv run --package k-build build/cli.py ...

uv run auto-syncs the target package's dependencies into the root .venv before executing, so you don't need to run uv sync first.

One thing to watch out for: plain uv sync at the root does not install the members' dependencies -- it syncs to whatever the root pyproject.toml declares (nothing, since the root package has no dependencies of its own). It will also uninstall any packages in the .venv that don't match, so running uv sync after uv sync --all-packages strips the environment back down. Not a disaster -- the next uv run --package re-installs from uv's local cache in a second or two -- but surprising if you didn't expect it. If you want both members installed at once (for example to poke around interactively), use:

uv sync --all-packages

Since each Dockerfile only copies its own subdirectory, there's no parent pyproject.toml visible inside the container. uv then treats the sub-package as a standalone project and resolves it independently, so no Dockerfile changes are needed.
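For illustration, a minimal Dockerfile along these lines (the image tag, paths, and the datasette invocation are placeholders, not my actual setup) might look like:

```dockerfile
# Hypothetical sketch: the build copies only app/, so the workspace
# root pyproject.toml never enters the image and uv resolves app as
# a standalone project.
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
WORKDIR /app
COPY app/ ./
RUN uv sync
CMD ["uv", "run", "datasette", "serve", "data.db", "--host", "0.0.0.0"]
```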

With only two subprojects this may not seem worth it, but I still like having a single .venv folder and a single uv.lock file.

See also the uv workspaces documentation.

Serena usage statistics from log files

Serena is an MCP server that gives AI coding agents semantic understanding of your codebase. Instead of reading raw files, the agent can navigate symbols, find references, rename across the project, and use language-server-backed tools -- all through the Model Context Protocol. I use it with Claude Code, where it runs as a local MCP server alongside each session.

Serena writes detailed logs to ~/.serena/logs/, organized by date. Each session produces a text file containing timestamps, tool calls with parameters, results, and execution times. There is no built-in way to get usage statistics from these logs, but the format is structured enough that a simple Python script can extract useful insights. I asked Claude Code to build one, and after a few iterations this is what came out.

The log format

Each log file corresponds to one MCP server session. The filename encodes the start time and process ID: mcp_20260325-175343_2802043.txt. Inside, every line follows Python's standard logging format:

INFO  2026-03-25 17:54:45,042 [MainThread] mcp.server.lowlevel.server:_handle_request:720 - Processing request of type CallToolRequest
INFO  2026-03-25 17:54:45,043 [MainThread] serena.task_executor:issue_task:192 - Scheduling Task-2:ReadMemoryTool
INFO  2026-03-25 17:54:45,046 [Task-2:ReadMemoryTool] serena.tools.tools_base:_log_tool_application:222 - read_memory: memory_name='some-memory-name'
INFO  2026-03-25 17:54:45,047 [Task-2:ReadMemoryTool] serena.task_executor:stop:336 - Task-2:ReadMemoryTool completed in 0.001 seconds

Each tool call goes through the same lifecycle: CallToolRequest -> Scheduling Task-N:ToolName -> parameters logged -> result logged -> completed in X seconds. A handful of regular expressions is enough to extract tool names, timestamps, durations, and result sizes from this.
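A sketch of that extraction, using patterns I reconstructed from the sample lines above (they are my own approximation, not anything shipped with Serena):

```python
import re

# Reconstructed patterns for the scheduling and completion lines.
SCHEDULE_RE = re.compile(r"Scheduling Task-\d+:(\w+)")
DONE_RE = re.compile(r"Task-\d+:(\w+) completed in ([\d.]+) seconds")

def parse_line(line):
    """Classify a log line as a tool start or completion, else None."""
    if m := DONE_RE.search(line):
        return ("done", m.group(1), float(m.group(2)))
    if m := SCHEDULE_RE.search(line):
        return ("start", m.group(1), None)
    return None

start = ("INFO  2026-03-25 17:54:45,043 [MainThread] "
         "serena.task_executor:issue_task:192 - Scheduling Task-2:ReadMemoryTool")
done = ("INFO  2026-03-25 17:54:45,047 [Task-2:ReadMemoryTool] "
        "serena.task_executor:stop:336 - Task-2:ReadMemoryTool "
        "completed in 0.001 seconds")
print(parse_line(start))  # ('start', 'ReadMemoryTool', None)
print(parse_line(done))   # ('done', 'ReadMemoryTool', 0.001)
```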

What the script reports

Sessions where Serena started but no tool was ever called are filtered out -- these are just idle MCP server instances.

Overview and daily activity:

============================================================
  Serena Usage Overview
============================================================
  Total sessions:   15 (63 idle skipped)
  Total tool calls: 94
  Serena versions:  0.1.4 (44 builds)

============================================================
  Sessions per Day
============================================================
  2026-03-22    2 sessions     6 tool calls  ######
  2026-03-25    4 sessions    44 tool calls  ############################################

Tool usage -- which Serena tools are called most often, with average and maximum execution times. GetSymbolsOverview and FindSymbol dominate, which makes sense -- they are the primary way an agent explores code structure before diving into specifics.

============================================================
  Tool Usage (top 20)
============================================================
  GetSymbolsOverview               28  avg 0.064s  max 0.354s       ############################
  FindSymbol                       27  avg 0.077s  max 0.638s       ###########################
  SearchForPattern                 17  avg 0.229s  max 2.940s       #################
  ListDir                           9  avg 0.007s  max 0.029s       #########
  FindFile                          9  avg 0.010s  max 0.027s       #########

Result sizes -- since every tool result is logged, the script can measure the character count per response. This approximates how much context window each tool consumes. FindSymbol and SearchForPattern are the expensive ones, while GetSymbolsOverview stays compact at ~360 chars per call.

============================================================
  Result Sizes (context window cost)
============================================================
  Total result data: 135.4k chars

  FindSymbol                        73.6k total  avg   2.7k  max  16.6k  (27 calls)
  SearchForPattern                  47.3k total  avg   2.8k  max  24.2k  (17 calls)
  GetSymbolsOverview                10.2k total  avg    363  max   3.4k  (28 calls)
  ListDir                            3.0k total  avg    335  max   1.6k  (9 calls)

The script also reports projects, session durations, language server startup times (Vue: 4s avg, Python: 0.3s), failed tool calls (9% failure rate, mostly FindSymbol hitting missing files), and hourly activity.
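The per-tool aggregation behind tables like these can be sketched as follows (the tuple shape and field names are illustrative, not the script's actual data model):

```python
from collections import defaultdict
from statistics import mean

def summarize(calls):
    """calls: iterable of (tool_name, duration_s, result_chars) tuples.

    Returns one row per tool: (name, count, avg_dur, max_dur, total_chars),
    sorted by call count descending.
    """
    by_tool = defaultdict(list)
    for tool, dur, size in calls:
        by_tool[tool].append((dur, size))
    rows = []
    for tool, vals in sorted(by_tool.items(), key=lambda kv: -len(kv[1])):
        durs = [d for d, _ in vals]
        sizes = [s for _, s in vals]
        rows.append((tool, len(vals), mean(durs), max(durs), sum(sizes)))
    return rows

calls = [("FindSymbol", 0.05, 2700), ("FindSymbol", 0.10, 1500),
         ("ListDir", 0.007, 335)]
for tool, n, avg, mx, total in summarize(calls):
    print(f"{tool:20} {n:3}  avg {avg:.3f}s  max {mx:.3f}s  {total} chars")
```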

Claude Code statusline

The Claude Code statusline is a customizable bar at the bottom of the terminal that runs a shell script after each assistant message. Claude Code sends JSON session data to stdin, and whatever the script prints becomes the status bar content -- ANSI colors included.

My setup displays the current model, a context window progress bar, and rate limit information when available.

Configuration

The statusline is enabled in ~/.claude/settings.json:

{
  "statusLine": {
    "type": "command",
    "command": "bash ~/.claude/statusline-command.sh"
  }
}

The script at ~/.claude/statusline-command.sh:

#!/usr/bin/env bash
input=$(cat)

CYN='\033[1;36m' GRN='\033[0;32m' YLW='\033[0;33m' MAG='\033[0;35m' RST='\033[0m'

eval "$(echo "$input" | jq -r '
  @sh "model=\(.model.display_name // "Claude")",
  @sh "used=\(.context_window.used_percentage // "")",
  @sh "rl_pct=\(.rate_limits.five_hour.used_percentage // .rate_limits.seven_day.used_percentage // "")",
  @sh "rl_reset=\(.rate_limits.five_hour.resets_at // .rate_limits.seven_day.resets_at // "")",
  @sh "rl_label=\(if .rate_limits.five_hour.used_percentage then "5h" elif .rate_limits.seven_day.used_percentage then "7d" else "" end)"
')"

printf "%b%b" "$CYN" "$model"

if [ -n "$used" ]; then
  used_int=$(printf '%.0f' "$used")
  filled=$((used_int / 5))
  bar=$(printf "%${filled}s" | tr ' ' '#')$(printf "%$((20 - filled))s" | tr ' ' '-')
  printf "%b  📊 %b[%b%s%b]%b %s%%" "$RST" "$RST" "$GRN" "$bar" "$RST" "$YLW" "$used_int"
fi

if [ -n "$rl_pct" ]; then
  rl_int=$(printf '%.0f' "$rl_pct")
  reset_time=""
  [ -n "$rl_reset" ] && reset_time=$(date -d "@${rl_reset}" +%H:%M 2>/dev/null || date -r "${rl_reset}" +%H:%M)
  printf "  %b⏱️%s:%s%%%b" "$MAG" "$rl_label" "$rl_int" "$RST"
  [ -n "$reset_time" ] && printf "  %s" "$reset_time"
fi

printf "%b" "$RST"

What it shows

The script parses the JSON that Claude Code pipes in and displays up to three pieces of information on a single line:

  1. Model name in bold cyan (e.g. "Opus" or "Sonnet")

  2. Context window usage as a 20-character progress bar (# for filled, - for empty) with a percentage

  3. Rate limit usage for the 5-hour or 7-day window, including the reset time -- this only appears when the data is available (Claude Pro/Max subscriptions)

Before the first API response, only the model name is shown because context and rate limit fields are still null.

[Screenshot: the statusline]

How it works

Claude Code runs the configured command after each assistant message and on permission mode changes. The full session state is sent as JSON to stdin -- the documentation lists all available fields. The script uses jq to extract what it needs and printf with ANSI escape codes to produce colored output.

One thing to note: the // fallbacks in the jq program are important. Fields like used_percentage and the rate limit data are null before the first API call, so without a fallback the script would print "null" in the status bar.

The /statusline slash command can generate a script from a natural language description, but I used Claude Code itself to iterate on the script until I was happy with the output.