
Voice AI

Last updated May 4, 2026

Talk through findings hands-free during a walkthrough. Hometrace transcribes, classifies, and routes each observation into the right place in your report.

Voice AI lets you talk through what you see as you walk a property. Hometrace transcribes the audio, then uses AI to figure out whether you're updating a spec, adding a finding, or just narrating, and routes each observation to the right section, subsection, and (when relevant) the right spec field. The goal is to keep your hands and eyes on the inspection while still producing a structured report.

What you'll need

  • The Hometrace mobile app on iOS or Android. See Using the Mobile App.
  • A live inspection in progress.
  • Microphone permission granted on first use.

How it works

When you tap the microphone, the app records audio locally. When you tap Done, that audio goes to Hometrace's server, where two things happen:

  1. Transcription. The audio is transcribed by a state-of-the-art speech-to-text model tuned for natural, conversational dictation. It handles inspector jargon, code references, and the usual background noise of a walkthrough well.
  2. Parsing. The transcript is then run through an AI parser along with a structured snapshot of your current location in the report (template, areas, subsections, specs in scope, your comment library). The parser classifies what you said into one of three buckets and writes structured output.

You'll see a "Transcribing… Analyzing notes & finding items" status while this runs (typically 2–6 seconds), then a "Review what we heard" sheet with the parsed results for you to confirm or edit before they're saved.
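
Conceptually, the two-step flow looks something like this toy sketch. The function names and the naive prefix matching are illustrative stand-ins, not Hometrace's actual code; the real parser is an AI model working over the full report snapshot:

```python
# Toy sketch of the two-step flow; names and logic are illustrative,
# not Hometrace's actual implementation.

def transcribe(audio_bytes: bytes) -> str:
    # Stand-in for the speech-to-text model (step 1).
    return "year built: 1985"

def parse(transcript: str, context: dict) -> list[dict]:
    # Stand-in for the AI parser (step 2): here, a naive match of the
    # transcript against spec fields currently in scope.
    items = []
    for field in context["specs_in_scope"]:
        prefix = field.lower() + ":"
        if transcript.lower().startswith(prefix):
            value = transcript[len(prefix):].strip()
            items.append({"kind": "spec_update", "field": field, "value": value})
    return items

def process_dictation(audio_bytes: bytes, context: dict) -> list[dict]:
    # Transcribe first, then classify against the report context.
    return parse(transcribe(audio_bytes), context)

items = process_dictation(b"...", {"specs_in_scope": ["Year Built", "Occupancy"]})
# items -> [{"kind": "spec_update", "field": "Year Built", "value": "1985"}]
```

The key point is that parsing never sees raw audio; it works from the transcript plus a structured snapshot of where you are in the report.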

The three things voice can do

The AI decides, per utterance, which of these applies:

  • Spec update sets or changes a value on a spec field. Phrases like "year built: 1985", "the roof material is asphalt shingle", "occupancy: vacant", or just "asphalt" said while a relevant spec is in scope. The wizard updates the field directly; no finding is created.
  • Spec-scoped finding adds a note attached to a specific spec field. Phrases like "add a note to the occupancy spec that the home is vacant but fully furnished" or "for the year-built field, the appraiser disagrees". The note appears under that spec's notes section.
  • Subsection-level finding adds a finding (deficiency, general note, or limitation) at the subsection level. Phrases like "the GFCI outlet under the kitchen sink isn't tripping when I press test" or "add a note that the laundry door doesn't latch".

If a finding matches an entry in your template's comment library, Hometrace links the finding to that template note (so any standardized language and recommendations come through). If nothing matches, you get a free-form finding with the description you dictated.
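
To make the three buckets and the library linking concrete, here is one way the parsed output could be shaped. The field names and the substring matcher are assumptions for illustration; the real matcher is semantic, not a literal text comparison:

```python
# Illustrative shapes for the three result types and the comment-library
# lookup; field names are assumptions, not Hometrace's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParsedItem:
    kind: str                                # "spec_update" | "spec_finding" | "finding"
    description: str
    spec_field: Optional[str] = None         # set for spec updates / spec-scoped findings
    subsection: Optional[str] = None         # routing target for subsection findings
    template_note_id: Optional[str] = None   # set if a comment-library entry matched

def link_library(item: ParsedItem, library: dict[str, str]) -> ParsedItem:
    # Naive stand-in: substring match against the comment library.
    for note_id, text in library.items():
        if text.lower() in item.description.lower():
            item.template_note_id = note_id  # standardized language comes through
            break
    return item                              # no match -> stays a free-form finding

library = {"gfci-01": "GFCI outlet"}
finding = ParsedItem(kind="finding",
                     description="GFCI outlet under the kitchen sink isn't tripping",
                     subsection="Electrical")
linked = link_library(finding, library)
# linked.template_note_id == "gfci-01": the GFCI library entry matched
```

An unmatched dictation keeps `template_note_id` unset and is saved as a free-form finding with your exact wording.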

Steps

  1. Open the inspection on the Hometrace mobile app.
  2. Tap the microphone icon at the bottom of the screen.
  3. The screen shows a red Listening indicator with a recording timer. Speak naturally. Examples that work well:
    • "In the kitchen, the GFCI outlet under the sink isn't tripping when I press the test button. That's a safety issue."
    • "Year built: 1972. Add a note that the original electrical panel is still in service."
    • "Foundation: slab, year built: 1980, roof condition is fair, with crawl-space access." (mixed spec updates)
  4. Tap Done when finished. Hometrace transcribes and parses (a few seconds).
  5. The Review what we heard sheet appears with each parsed item as a row. Toggle off anything you don't want, edit text inline, or accept everything and save.
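
The confirm-and-save step behaves roughly like this sketch: only items left toggled on are kept, and any inline edits override what the parser extracted. The dictionary shape and function name are hypothetical:

```python
# Sketch of the confirm step; field names are illustrative,
# not the app's actual data model.

def apply_review(items: list[dict]) -> list[dict]:
    """Keep toggled-on items and apply inline edits before saving."""
    saved = []
    for item in items:
        if not item.get("accepted", True):
            continue                           # toggled off in the sheet
        edits = item.get("edits", {})
        merged = {k: v for k, v in item.items() if k not in ("accepted", "edits")}
        merged.update(edits)                   # inline edits override parsed values
        saved.append(merged)
    return saved

batch = [
    {"description": "Laundry door doesn't latch", "severity": "deficiency",
     "accepted": True, "edits": {"severity": "general note"}},
    {"description": "Misheard phrase", "accepted": False},   # toggled off
]
saved = apply_review(batch)
# saved -> one item, with severity corrected to "general note"
```

Nothing reaches the report until this step completes, which is why a misheard phrase never costs you more than a toggle.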

Tips for good dictation

  • Lead with location. "In the basement, the…" or "For the master bath, the…" gives the AI a clear routing signal. You can dictate findings for areas you're not currently in, which is useful for "while I'm here, also note that…" callbacks.
  • Mention severity if it matters. Words like "safety issue", "deficiency", or "FYI" / "general note" nudge the type and severity classification. Otherwise, the AI defaults to its best guess.
  • Spec field updates can be terse. "Vacant" said while looking at the Occupancy spec is enough; you don't need to repeat the field name.
  • One thought at a time is fine, but you can also batch. A single dictation can produce multiple findings, multiple spec updates, or a mix.
  • Background noise is OK; loud machinery is not. The model handles HVAC hum, traffic, and distant conversation well. Standing in front of running power tools or directly under a barking dog will hurt accuracy. Move a few feet.

Reviewing and correcting

The confirmation sheet is the safety net. Every parsed item shows up before anything is saved:

  • Toggle individual items off if the AI extracted something you didn't mean.
  • Edit any field inline: item name, description, severity, target subsection.
  • Reroute to a different area or subsection if it picked the wrong one. The wizard learns from corrections.
  • Cancel to throw the whole batch away (for example, if a phone call interrupted you mid-thought).

Limits and notes

  • Recording length. There's no hard time cap on the client, but the audio upload is capped at 6 MB. In practice that's a few minutes of speech, longer than any single dictation should run anyway.
  • Network required for processing. The transcription and parsing run server-side, so dictation needs connectivity at the moment you tap Done. The recording itself can be made offline; the app will queue parsing when you're back online.
  • Privacy. Audio is sent to a transcription service for processing, and the resulting text is parsed by an AI service. Audio files are not retained after parsing. See our Privacy Policy.
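
Taken together, the upload cap and the offline behavior amount to a client-side check along these lines. The 6 MB cap comes from this page; the function name and return states are illustrative assumptions:

```python
# Illustrative pre-upload check; only the 6 MB cap is documented,
# the rest is an assumption about client behavior.
MAX_UPLOAD_BYTES = 6 * 1024 * 1024  # documented 6 MB audio upload cap

def ready_to_upload(audio_bytes: bytes, online: bool) -> str:
    if len(audio_bytes) > MAX_UPLOAD_BYTES:
        return "too_large"   # chunk the dictation into shorter takes
    if not online:
        return "queued"      # recording kept locally, parsed when back online
    return "upload"
```

In other words, a lost signal mid-walkthrough doesn't lose the recording; it just delays the transcription and parsing until you're connected again.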

Next step

Open the Hometrace mobile app on your device and try voice on your next walkthrough.