Your Time, Every Modality — Desktop, Camera, Document
Most time tracking tools give you one input: a timer button. We think that’s backwards. Your work happens across screens, documents, conversations, and even handwritten notes — so time capture should meet you wherever the work is.
This week we shipped three new capture modalities:
Desktop activity heartbeat — the desktop agent runs a lightweight background process that groups application-level activity signals into time entries using configurable legal billing increments (6-minute / 0.1-hour rounding). No screen recording, no screenshots, no surveillance — just process-level metadata interpreted by AI. For attorneys and consultants who live in desktop applications, this closes the biggest gap in automated time tracking: the hours spent outside email and calendar.
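The increment arithmetic is simple to illustrate. A minimal sketch (not TimeSentry's actual implementation), assuming durations round up to the nearest increment, which is the common convention in legal billing:

```python
from math import ceil

def round_to_increment(seconds: int, increment_min: int = 6) -> float:
    """Round a raw duration up to the nearest billing increment,
    returned in decimal hours (6 min -> 0.1 hr)."""
    increments = ceil(seconds / (increment_min * 60))
    return increments * increment_min / 60

# 14 minutes of tracked activity bills as 0.3 hr under 6-minute rounding
print(round_to_increment(14 * 60))  # 0.3
```

Whether partial increments round up, down, or to nearest is exactly the kind of thing the configurable increment setting controls.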
Photo-to-time-entry via MMS — snap a picture of handwritten notes, a whiteboard, or a receipt and text it to TimeSentry. The image is processed through our multimodal vision model, which extracts structured time data (client, matter, duration, narrative) and creates a draft entry. This is the same inference stack that powers our email and calendar mappers, extended to unstructured visual input. For professionals who still take pen-and-paper notes in client meetings, this eliminates the transcription step entirely.
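The extracted structure can be pictured as a small record plus a review gate. A hypothetical sketch (field and function names are illustrative, not TimeSentry's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftTimeEntry:
    # Fields the vision model extracts from the image; any may be missing
    client: Optional[str]
    matter: Optional[str]
    duration_hours: Optional[float]
    narrative: str
    confidence: float  # model's self-reported extraction confidence

def needs_review(entry: DraftTimeEntry, threshold: float = 0.8) -> bool:
    """A draft stays a draft until required fields are present
    and extraction confidence clears the threshold."""
    missing = entry.client is None or entry.matter is None
    return missing or entry.confidence < threshold
```

Keeping entries as drafts until review is what makes it safe to accept messy inputs like whiteboard photos.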
Word plugin document-change tracking — the Office add-in now monitors document mutation events and auto-saves time entries as you draft. Combined with the desktop heartbeat, this gives near-complete coverage of document-heavy workflows — brief drafting, contract review, memo writing — without any manual logging.
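Turning a stream of document-change events into time entries is essentially idle-gap sessionization. A minimal sketch under that assumption (the real grouping logic and thresholds are the plugin's own):

```python
def sessions_from_events(timestamps, idle_gap=300):
    """Group document-change timestamps (seconds) into work sessions,
    splitting whenever the gap between edits exceeds idle_gap."""
    sessions = []
    start = prev = None
    for t in sorted(timestamps):
        if start is None:
            start = prev = t
        elif t - prev > idle_gap:
            sessions.append((start, prev))
            start = prev = t
        else:
            prev = t
    if start is not None:
        sessions.append((start, prev))
    return sessions

# edits at 0s, 60s, 120s, then a 10-minute break, then 800s, 860s
print(sessions_from_events([0, 60, 120, 800, 860]))
# [(0, 120), (800, 860)]
```

Each resulting session becomes a candidate time entry, which the billing-increment rounding then normalizes.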
The AI agent’s tool-calling layer also gained access to your task and project graph, so conversational time management now resolves against real project structure rather than fuzzy name matching. The result: fewer corrections, faster approvals.
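The difference between fuzzy matching and graph resolution can be sketched as an exact lookup against real structure. A hypothetical illustration (the graph shape and names are invented for the example):

```python
# Hypothetical project graph: client -> matter -> known tasks
PROJECT_GRAPH = {
    "Acme Corp": {"Series B Financing": ["Diligence", "Term Sheet Review"]},
}

def resolve_task(client: str, matter: str, task: str):
    """Resolve a conversational reference against real project structure.
    Returns the canonical (client, matter, task) tuple, or None so the
    agent can ask a clarifying question instead of guessing."""
    tasks = PROJECT_GRAPH.get(client, {}).get(matter, [])
    return (client, matter, task) if task in tasks else None
```

A failed lookup surfaces as a question rather than a silently wrong entry, which is where the "fewer corrections" claim comes from.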