How AI Rates Images
A plain-language walkthrough of the technology pipeline — built for transparency, privacy, and accuracy.
The Analysis Pipeline
Image Ingestion
Your image is uploaded to our server over an encrypted TLS 1.3 connection. It is never forwarded to a third-party API, cloud vision service, or human reviewer, and it exists only in memory for the duration of analysis.
Local Vision Model
A fine-tuned vision-language model (VLM) runs entirely on our private GPU servers. This model has been trained on thousands of clinically labelled images to assess physical attributes using standardised measurement conventions.
Structured Output
The model produces a structured JSON object: a set of numerical scores and categorical labels for each attribute. This output is anonymised; it includes no raw image data and no personally identifiable information.
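A structured result of this kind might look like the sketch below. The attribute names, scales, and values here are invented for illustration and are not the service's actual schema:

```python
import json

# Illustrative anonymised structured result.
# Keys, scales, and values are hypothetical, not the real schema.
result = {
    "scores": {"symmetry": 7.2, "skin_tone_evenness": 6.8},   # numerical scores
    "labels": {"curvature": "mild", "framing_quality": "good"},  # categorical labels
    "confidence": {"symmetry": [6.5, 7.9]},  # ranges, not single numbers
}

# The serialised object carries only numbers and labels; no pixels, no identifiers.
print(json.dumps(result))
```

Because the object is plain JSON, it can be stored, audited, or passed downstream without ever touching the original image bytes.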
Report Generation
The structured data is passed to a language model to generate a personalised written report. Crucially, the LLM only sees numbers and labels — never your image. The report contextualises your results against population data from peer-reviewed research.
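One way a report prompt can be assembled from the structured data alone, so the language model never receives pixel data; the function name and fields are hypothetical:

```python
def build_report_prompt(result):
    """Hypothetical sketch: the report LLM receives only the numbers
    and labels from the structured output, never the image itself."""
    lines = ["Write a plain-language report from these measurements:"]
    for name, score in result["scores"].items():
        lines.append(f"- {name}: {score}")
    for name, label in result["labels"].items():
        lines.append(f"- {name}: {label}")
    return "\n".join(lines)

prompt = build_report_prompt({
    "scores": {"symmetry": 7.2},
    "labels": {"curvature": "mild"},
})
print(prompt)
```

The prompt is a pure function of the structured scores, which is what makes the "numbers and labels only" guarantee auditable.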
Image Deletion
After analysis is complete, your image is permanently deleted from memory. We do not store original images. Only your anonymised structured results are retained, and only if you opt in to save them.
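The in-memory lifecycle described above can be sketched as follows; the function and the stand-in model are illustrative, not the production code:

```python
import io

def analyse_in_memory(upload_bytes, model):
    """Sketch of the in-memory lifecycle: the image lives only in a RAM
    buffer and is released once analysis finishes (names are illustrative)."""
    buf = io.BytesIO(upload_bytes)
    try:
        return model(buf)       # returns structured results only
    finally:
        buf.truncate(0)         # drop the image bytes
        buf.close()             # buffer released; nothing was written to disk

# Usage with a stand-in "model" that returns only numbers and labels:
result = analyse_in_memory(b"\x89PNG...", lambda b: {"scores": {}, "labels": {}})
```

The try/finally pattern guarantees the buffer is released even if analysis raises an error, so no code path leaves image bytes behind.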
Technical Architecture
What Gets Assessed
The model evaluates 36 distinct attributes. Estimated measurements are based on visual proportion analysis and calibrated against our training dataset of 5,000 clinically labelled images.
How We Measure Accuracy
The model was trained on a dataset of 5,000 images, each annotated with 36 attributes by multiple reviewers following a standardised labelling protocol. Inter-rater reliability was measured to ensure label consistency before training.
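Inter-rater reliability on a categorical attribute is commonly quantified with a statistic such as Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch for two raters, with invented labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["mild", "mild", "marked", "none", "mild"]
b = ["mild", "marked", "marked", "none", "mild"]
print(f"kappa = {cohens_kappa(a, b):.3f}")  # prints: kappa = 0.688
```

A threshold on kappa (or a similar statistic) can then gate which labels are consistent enough to use for training.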
Estimated measurements (length, girth) use a proportion-based approach — not ruler detection — calibrated against known reference points in the training data. Accuracy is expressed as a confidence interval, not a single number.
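A proportion-based estimate of this kind can be sketched as follows; the reference values and the 8% calibration uncertainty are assumed for illustration, not figures from the actual system:

```python
def estimate_length(target_px, reference_px, reference_cm, rel_error=0.08):
    """Scale a pixel measurement by a known reference in the same image,
    and report a +/- interval rather than a single number.
    rel_error is an assumed calibration uncertainty, not a measured figure."""
    scale = reference_cm / reference_px          # cm per pixel
    estimate = target_px * scale
    margin = estimate * rel_error
    return estimate, (estimate - margin, estimate + margin)

est, (lo, hi) = estimate_length(target_px=420, reference_px=300, reference_cm=10.0)
print(f"{est:.1f} cm (range {lo:.1f}-{hi:.1f} cm)")  # prints: 14.0 cm (range 12.9-15.1 cm)
```

Reporting the interval rather than the point estimate is what lets the written report state plainly how approximate each measurement is.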
Limitation transparency
AI visual estimation has inherent limitations. Measurement accuracy depends on image angle, lighting, and framing. Reports include confidence ranges and clearly state where estimates are approximate. We do not present AI output as medical-grade measurement.
Training Data & Ethics
Training data was sourced from consented, anonymised repositories. No personally identifiable information was retained. All images used in training were reviewed for consent and legal compliance before inclusion.
The annotation process was designed to eliminate subjective aesthetic judgment from the labelling pipeline. Annotators assessed physical attributes against objective criteria, not personal preference. Rating scales were pre-defined and calibrated.