Mapping my car's history with InfluxDB and VersaTiles

My Home Assistant instance tracks my car's GPS position via device_tracker.car_location and stores it in InfluxDB. Home Assistant uses the Renault integration, which talks to the same API I used for downloading my charge history. I wanted to see all historic positions on a map -- not just the current one in Home Assistant.

[Image: map of all recorded parking positions]

The positions are only updated when a journey ends, so the data shows parking locations, not routes. Still, a lot of parking positions accumulate in InfluxDB.

Querying InfluxDB with Flux

The InfluxDB query uses pivot to combine the separate latitude and longitude field rows into a single row per timestamp:

from(bucket: "home_assistant")
  |> range(start: 2025-10-01T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "device_tracker.car_location")
  |> filter(fn: (r) => r._field == "latitude" or r._field == "longitude")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")

After a few months this returns tens of thousands of rows. Server-side deduplication in Python groups them by rounded coordinates into a much smaller set of unique locations with a visit count.

I tried doing the aggregation in Flux directly (group + count), but it timed out on my Raspberry Pi 4. The Python-side dedup is fast enough -- the bottleneck is InfluxDB scanning the rows (~2 seconds).
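The dedup step itself is only a few lines. A minimal sketch, assuming the pivoted records expose latitude and longitude keys; the rounding precision (4 decimal places, roughly 11 m) is my choice for illustration:

from collections import Counter

def dedupe(records, precision=4):
    # 4 decimal places is roughly 11 m -- close enough to treat
    # repeated visits to the same parking spot as one location.
    counts = Counter(
        (round(r["latitude"], precision), round(r["longitude"], precision))
        for r in records
    )
    return [
        {"lat": lat, "lon": lon, "count": n}
        for (lat, lon), n in counts.items()
    ]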

VersaTiles as map provider

Trying VersaTiles has been on my list since I attended a talk about it last summer at FrOSCon. It's an open-source stack serving OpenStreetMap vector tiles at tiles.versatiles.org. So no need to self-host anything. But I could if I wanted to.

On the frontend it works with MapLibre GL JS. The @versatiles/style npm package provides ready-made map styles. Both are loaded as ES modules from jsDelivr:

<script type="module">
  import maplibregl from 'https://cdn.jsdelivr.net/npm/maplibre-gl@5/+esm';
  import { colorful } from 'https://cdn.jsdelivr.net/npm/@versatiles/style@5/+esm';

  const style = colorful({ baseUrl: 'https://tiles.versatiles.org', language: 'de' });
  const map = new maplibregl.Map({ container: 'map', style });
</script>

Important: the baseUrl parameter is required, otherwise sprites and fonts resolve as relative URLs against my own server.

The backend

The whole backend is a single Flask file with one API route and only two dependencies: flask and influxdb-client.
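A condensed sketch of that file -- the connection details (url, token, org) and the route path are placeholders, and dedupe is the helper sketched above:

from flask import Flask, jsonify
from influxdb_client import InfluxDBClient

app = Flask(__name__)
# Placeholder connection details -- adjust to your InfluxDB setup.
client = InfluxDBClient(url="http://localhost:8086", token="...", org="home")

FLUX = '''
from(bucket: "home_assistant")
  |> range(start: 2025-10-01T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "device_tracker.car_location")
  |> filter(fn: (r) => r._field == "latitude" or r._field == "longitude")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''

@app.route("/api/positions")
def positions():
    # Each pivoted record carries latitude and longitude as columns.
    tables = client.query_api().query(FLUX)
    records = [rec.values for table in tables for rec in table.records]
    return jsonify(dedupe(records))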

The positions are shown as red dots with a heatmap overlay -- the heatmap weight is based on how often the car was seen at each location. Hovering a dot shows the count.

Full code (index.html + main.py) in a gist.

Using uv workspaces

I have a pet project with a build folder that builds the database via a CLI and an app folder with everything needed to run a Datasette-based read-only website. Both folders have a pyproject.toml, and the app is deployed with a custom Dockerfile that uses uv.

To migrate them to uv workspaces I added a top-level pyproject.toml. The uv docs state that every workspace needs a root which is itself a member, so the root gets both a [project] table and the [tool.uv.workspace] one:

[project]
name = "k-workspace"
version = "0.1.0"
requires-python = ">=3.12"

[tool.uv.workspace]
members = ["app", "build"]

The subfolders keep their own pyproject.toml -- unchanged. For reference, app/pyproject.toml looks roughly like this (and build/pyproject.toml is analogous, with name = "k-build"):

[project]
name = "k-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "datasette>=1.0a",
    "datasette-cluster-map",
    # ...
]

The name field here is what you pass to uv run --package later. Next, the per-package uv.lock files can be removed and a new lock file created at the top level:

rm app/uv.lock build/uv.lock
uv lock

This produces one uv.lock at the root that resolves dependencies across both packages. Running uv lock from a subdirectory still works — uv detects the workspace and uses the root lockfile.

To run a tool from a specific package, use --package:

uv run --package k-app datasette ...
uv run --package k-build build/cli.py ...

uv run auto-syncs the target package's dependencies into the root .venv before executing, so you don't need to run uv sync first.

One thing to watch out for: plain uv sync at the root does not install the members' dependencies -- it syncs to whatever the root pyproject.toml declares (nothing, since the root package has no dependencies of its own). It will also uninstall any packages in the .venv that don't match, so running uv sync after uv sync --all-packages strips the environment back down. Not a disaster -- the next uv run --package re-installs from uv's local cache in a second or two -- but surprising if you didn't expect it. If you want both members installed at once (for example to poke around interactively), use:

uv sync --all-packages

Since each Dockerfile only copies its own subdirectory, there's no parent pyproject.toml visible inside the container. uv then treats the sub-package as a standalone project and resolves it independently. No Dockerfile changes needed.

For only two subprojects this may not seem worth it, but I still like having just one .venv folder and one uv.lock file.

See also the uv workspaces documentation.

Serena usage statistics from log files

Serena is an MCP server that gives AI coding agents semantic understanding of your codebase. Instead of reading raw files, the agent can navigate symbols, find references, rename across the project, and use language-server-backed tools -- all through the Model Context Protocol. I use it with Claude Code, where it runs as a local MCP server alongside each session.

Serena writes detailed logs to ~/.serena/logs/, organized by date. Each session produces a text file containing timestamps, tool calls with parameters, results, and execution times. There is no built-in way to get usage statistics from these logs, but the format is structured enough that a simple Python script can extract useful insights. I asked Claude Code to build one, and after a few iterations this is what came out.

The log format

Each log file corresponds to one MCP server session. The filename encodes the start time and process ID: mcp_20260325-175343_2802043.txt. Inside, every line follows Python's standard logging format:

INFO  2026-03-25 17:54:45,042 [MainThread] mcp.server.lowlevel.server:_handle_request:720 - Processing request of type CallToolRequest
INFO  2026-03-25 17:54:45,043 [MainThread] serena.task_executor:issue_task:192 - Scheduling Task-2:ReadMemoryTool
INFO  2026-03-25 17:54:45,046 [Task-2:ReadMemoryTool] serena.tools.tools_base:_log_tool_application:222 - read_memory: memory_name='some-memory-name'
INFO  2026-03-25 17:54:45,047 [Task-2:ReadMemoryTool] serena.task_executor:stop:336 - Task-2:ReadMemoryTool completed in 0.001 seconds

Each tool call goes through the same lifecycle: CallToolRequest -> Scheduling Task-N:ToolName -> parameters logged -> result logged -> completed in X seconds. A handful of regular expressions is enough to extract tool names, timestamps, durations, and result sizes from this.
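A minimal sketch of that extraction, with patterns derived from the lifecycle lines above (timestamps and result sizes are left out here, and matching completions back to scheduled tasks is simplified):

import re

SCHEDULED = re.compile(r"Scheduling Task-(\d+):(\w+)")
COMPLETED = re.compile(r"Task-(\d+):(\w+) completed in ([\d.]+) seconds")

def parse_session(path):
    # Collect one dict per tool call: tool name plus execution time.
    calls = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if m := SCHEDULED.search(line):
                calls.append({"task": m[1], "tool": m[2]})
            elif m := COMPLETED.search(line):
                # Attach the duration to the most recent matching task.
                for call in reversed(calls):
                    if call["task"] == m[1] and "duration" not in call:
                        call["duration"] = float(m[3])
                        break
    return calls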

What the script reports

Sessions where Serena started but no tool was ever called are filtered out -- these are just idle MCP server instances.

Overview and daily activity:

============================================================
  Serena Usage Overview
============================================================
  Total sessions:   15 (63 idle skipped)
  Total tool calls: 94
  Serena versions:  0.1.4 (44 builds)

============================================================
  Sessions per Day
============================================================
  2026-03-22    2 sessions     6 tool calls  ######
  2026-03-25    4 sessions    44 tool calls  ############################################

Tool usage -- which Serena tools are called most often, with average and maximum execution times. GetSymbolsOverview and FindSymbol dominate, which makes sense -- they are the primary way an agent explores code structure before diving into specifics.

============================================================
  Tool Usage (top 20)
============================================================
  GetSymbolsOverview               28  avg 0.064s  max 0.354s       ############################
  FindSymbol                       27  avg 0.077s  max 0.638s       ###########################
  SearchForPattern                 17  avg 0.229s  max 2.940s       #################
  ListDir                           9  avg 0.007s  max 0.029s       #########
  FindFile                          9  avg 0.010s  max 0.027s       #########
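
The aggregation behind this table is plain Python. A sketch, assuming the call dicts produced by the parser sketch earlier:

from statistics import fmean

def tool_stats(calls):
    # Group durations by tool, then report count, average, and maximum,
    # sorted by call count descending (as in the table above).
    by_tool = {}
    for call in calls:
        by_tool.setdefault(call["tool"], []).append(call.get("duration", 0.0))
    return {
        tool: {"count": len(d), "avg": fmean(d), "max": max(d)}
        for tool, d in sorted(by_tool.items(), key=lambda kv: -len(kv[1]))
    }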

Result sizes -- since every tool result is logged, the script can measure the character count per response. This approximates how much context window each tool consumes. FindSymbol and SearchForPattern are the expensive ones, while GetSymbolsOverview stays compact at ~360 chars per call.

============================================================
  Result Sizes (context window cost)
============================================================
  Total result data: 135.4k chars

  FindSymbol                        73.6k total  avg   2.7k  max  16.6k  (27 calls)
  SearchForPattern                  47.3k total  avg   2.8k  max  24.2k  (17 calls)
  GetSymbolsOverview                10.2k total  avg    363  max   3.4k  (28 calls)
  ListDir                            3.0k total  avg    335  max   1.6k  (9 calls)

The script also reports projects, session durations, language server startup times (Vue: 4s avg, Python: 0.3s), failed tool calls (9% failure rate, mostly FindSymbol hitting missing files), and hourly activity.