
Photo Analysis AI

Last updated May 4, 2026

Stream photos from your phone and let Hometrace pre-fill suggested findings, spec values, and photo placements for your review.

When you upload inspection photos, Hometrace's Photo Analysis AI scans them for visible defects, code violations, and safety concerns, then pre-fills suggested findings, spec values, and photo placements in the right sections of your report. You stay in the driver's seat: every suggestion is yours to accept, edit, or dismiss.

What you'll need

  • The Hometrace mobile app or web dashboard
  • A few inspection photos (one will work; more is better)

How it works

Photo analysis runs in two paths, both manually triggered. The choice is about scope:

  • Bulk analysis runs when you trigger it on the inspection as a whole. It walks every subsection in your report, looks at all the photos attached, and produces suggestions across the entire inspection in one pass. Run this once you're done capturing photos for the inspection (or when you want to re-analyze after adding more).
  • Inline analysis runs when you trigger it on a single spec or finding. It's faster, scoped to that one item, and useful when you've added a few new photos mid-write-up and want suggestions just for them.

Both paths share the same AI vision pipeline:

  1. Photo fetch and encode. Hometrace pulls each photo from secure cloud storage and prepares it for analysis (concurrency-capped so a 50-photo inspection doesn't overwhelm the pipeline).
  2. Vision analysis. Photos go to an AI vision model along with the structural context for the section being analyzed: spec list, finding types your template supports, severity levels, and the inspector's existing notes for that area. The model is prompted with inspection-domain knowledge so it knows what to look for in each section type.
  3. Structured output. Instead of free-form text, the model returns a structured response with three kinds of items: spec values (e.g., roof material, foundation type), findings (defects, code concerns, safety issues), and photo attachments (which photo supports which item). Each item carries a confidence score.
  4. Routing and review. The system routes each suggestion by confidence: high enough to surface as a pending change, middling enough to flag for review, or below the floor and discarded.
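
The concurrency-capped fan-out in step 1 can be sketched with a semaphore. Everything here (function names, the cap value, the placeholder result shape) is illustrative, not Hometrace's actual code:

```python
import asyncio

MAX_CONCURRENT = 5  # illustrative cap; the real pipeline's limit isn't documented

async def analyze_photo(photo_id: str, sem: asyncio.Semaphore) -> dict:
    """Fetch, encode, and analyze one photo under a shared concurrency cap."""
    async with sem:
        await asyncio.sleep(0)  # stand-in for the fetch/encode/vision calls
        return {"photo_id": photo_id, "suggestions": []}

async def analyze_inspection(photo_ids: list[str]) -> list[dict]:
    """Run all photos through the pipeline, at most MAX_CONCURRENT at a time."""
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    return await asyncio.gather(*(analyze_photo(p, sem) for p in photo_ids))
```

The semaphore is why a 50-photo inspection doesn't overwhelm the pipeline: all photos are scheduled at once, but only a handful are in flight at any moment.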

Confidence and the Needs Review badge

Every suggestion carries a confidence score between 0 and 1. Hometrace bins them into three buckets:

  • High confidence (≥ 0.6): suggestion is created as a pending change you can accept or reject. These are the suggestions you'll see most often: clear defects, unambiguous spec values.
  • Low confidence (≥ 0.4 and < 0.6): suggestion is flagged with a Needs Review badge. These are calls the AI isn't sure about: ambiguous photos, partial views, or unfamiliar configurations. Always your judgment.
  • Below 0.4: dropped. Not surfaced to the inspector. The model didn't have enough signal to be useful.

This means the AI is conservative by design. It would rather drop a suggestion than create work for you reviewing low-quality guesses, but it surfaces middle-confidence ones because you're the expert who can quickly say yes or no.
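
The three buckets can be sketched as a small routing function (the thresholds match the ones above; the names and return values are illustrative, not Hometrace's actual API):

```python
HIGH_THRESHOLD = 0.6  # >= this: created as a pending change
LOW_THRESHOLD = 0.4   # >= this but below HIGH: flagged Needs Review

def route_suggestion(confidence: float) -> str:
    """Bin a 0-1 confidence score into the three buckets described above."""
    if confidence >= HIGH_THRESHOLD:
        return "pending"       # surfaced for accept/reject
    if confidence >= LOW_THRESHOLD:
        return "needs_review"  # surfaced with the Needs Review badge
    return "discarded"         # below the floor; never shown
```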

Steps

  1. On a section or subsection in the mobile app, tap the camera icon to take or upload photos.
  2. Photos sync to your dashboard as you shoot. They aren't analyzed yet; analysis is opt-in.
  3. Open the inspection on the web dashboard and trigger analysis. You have two choices:
    • Run bulk analysis on the whole inspection once you've uploaded the photos you want analyzed.
    • Run inline analysis on a single spec or finding when you only want suggestions for that one item.
  4. Suggested findings appear under the relevant sections, with high-confidence ones grouped under each spec and Needs Review items flagged for a closer look.
  5. Review each suggestion: accept it as-is, edit the description or severity, attach or remove photos, or dismiss it.

What it looks for

The AI is prompted to identify:

  • Visible defects like cracked outlets, missing GFCIs, water staining, deteriorated grout, broken seals, wood rot, missing fasteners, exposed wiring.
  • Safety and code concerns like missing handrails, improper venting, double-tapped breakers, blocked egress, missing TPR valves, fuel-burning equipment in living spaces.
  • Spec values that can be read directly from a photo: roof material, foundation type, panel manufacturer, water-heater age (when the data plate is visible), and similar.
  • Photo attachments, meaning which uploaded photos best illustrate each finding or spec value, so you don't have to drag-and-drop them yourself.

What it deliberately doesn't do:

  • It doesn't write the narrative for you in your voice. It produces a draft you tighten.
  • It doesn't make code interpretations as if it were a licensed inspector. It flags concerns; you decide whether they're code violations in your jurisdiction.
  • It doesn't substitute for hands-on testing. A photo of a GFCI outlet doesn't tell anyone whether the device actually trips.

Tips

  • More photos beat fewer. The model gets context from everything in the section. A wide shot plus close-ups produces better suggestions than a single hero shot.
  • Lighting matters. Use your device flashlight in dark spaces. Underexposed photos limit what the AI can see and lower its confidence.
  • Capture data plates. Manufacturer/model labels, year tags, AHJ stickers — those produce high-confidence spec auto-fills and save you data entry.
  • Don't over-trust low-confidence suggestions. The Needs Review badge exists for a reason. If something's wrong, dismiss it. The model isn't penalized for over-suggesting at that tier.
  • Re-trigger after adding new photos. If you added photos to a subsection after running bulk analysis, run inline analysis on that subsection's specs or findings to incorporate the new images. You can also run bulk analysis again across the whole inspection.

Privacy

Photos are processed by an AI vision service. Image bytes are sent to that service for the duration of the analysis call and are not retained by the service after processing. See our Privacy Policy for details on how photo data is handled.

Limits

  • A single bulk-analysis run is capped at 10 minutes per inspection. Very large inspections (hundreds of photos across dozens of subsections) usually finish well within that window thanks to streaming concurrency, but they're the edge case to watch.
  • Each analysis run counts against your monthly analysis quota. The Payments / Settings page shows your current usage and remaining quota.
  • The model returns text in English. Non-English inspector notes still get processed, but accuracy is best with English-language descriptions and template comments.
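
The 10-minute ceiling on a bulk run can be modeled as a simple timeout wrapper (the function names and placeholder body are an illustrative sketch, not Hometrace's implementation):

```python
import asyncio

BULK_TIMEOUT_SECONDS = 10 * 60  # per-inspection ceiling for one bulk run

async def run_bulk_analysis(inspection_id: str) -> str:
    """Stand-in for the real bulk-analysis pipeline."""
    await asyncio.sleep(0)  # placeholder for the actual work
    return f"analyzed {inspection_id}"

async def run_with_ceiling(inspection_id: str) -> str:
    """Abort the run if it exceeds the per-inspection time budget."""
    return await asyncio.wait_for(
        run_bulk_analysis(inspection_id), timeout=BULK_TIMEOUT_SECONDS
    )
```

A run that exceeds the budget raises `asyncio.TimeoutError`; a run that finishes in time returns normally.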

Next step

Open your dashboard and start an inspection to try Photo Analysis AI.