myhairline.ai uses Google's Gemini Vision multimodal AI combined with MediaPipe's 468-point facial landmark detection to analyze hair loss patterns from a single photo. The system identifies your Norwood stage, measures hairline recession, detects vertex thinning, and generates graft count estimates without any clinic visit or specialized equipment.
This article is for informational purposes only and does not constitute medical advice.
The Two AI Systems Working Together
myhairline.ai combines two distinct AI technologies, each handling a different part of the analysis.
MediaPipe Facial Landmarks
MediaPipe is Google's open-source framework for building on-device machine learning pipelines. Its Face Mesh solution detects 468 individual landmarks on the human face in real time. These landmarks map the precise geometry of facial features, including the forehead, brow ridge, and temples.
For hair loss analysis, the critical landmarks are:
- Forehead boundary points: Define where the forehead meets the hairline
- Temple landmarks: Map the lateral hairline position on both sides
- Brow ridge points: Establish the baseline for measuring forehead height
- Facial proportion points: Calculate the golden ratio (1.618) relationship between hairline and other facial features
MediaPipe processes your photo locally in the browser. No image data leaves your device during this step. The landmark coordinates are extracted as numerical data points.
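As a rough sketch of what those numerical data points enable, the snippet below derives a forehead-to-face proportion from three landmark positions. The coordinates, helper names, and the "rule of thirds" threshold are illustrative assumptions, not MediaPipe's actual index map or myhairline.ai's real formula:

```python
from dataclasses import dataclass


@dataclass
class Point:
    x: float  # normalized [0, 1] image coordinates, as MediaPipe returns them
    y: float


def euclidean(a: Point, b: Point) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5


def forehead_ratio(hairline: Point, brow: Point, chin: Point) -> float:
    """Forehead height as a fraction of total face height.

    A ratio well above one third of the face can suggest hairline
    recession (the classical "rule of thirds" for facial proportions).
    """
    forehead = euclidean(hairline, brow)
    face = euclidean(hairline, chin)
    return forehead / face


# Hypothetical landmark positions extracted from a photo
hairline, brow, chin = Point(0.5, 0.20), Point(0.5, 0.35), Point(0.5, 0.80)
ratio = forehead_ratio(hairline, brow, chin)
print(round(ratio, 3))  # 0.25 -> forehead is 25% of face height
```

Because the landmarks arrive as plain coordinates, all of this arithmetic can run client-side with no image ever leaving the device.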
Gemini Vision Multimodal Analysis
Google Gemini Vision is a large multimodal model capable of understanding images and text together. myhairline.ai sends your photo to Gemini Vision along with structured prompts that direct the model to:
- Classify the hair loss pattern against the Norwood scale (stages 1-7, including 3V)
- Assess the density gradient from the hairline to the vertex
- Identify miniaturization zones where hair is thinning but not yet gone
- Detect asymmetric loss patterns that may indicate non-androgenetic causes
- Evaluate the donor area density from available angles
Gemini Vision processes the image holistically, recognizing patterns it learned from training data that includes medical dermatology images, clinical hair loss documentation, and trichoscopy references.
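A structured prompt of the kind described above might look like the following sketch. The actual prompt and response schema used by myhairline.ai are not public, so every field name here is an assumption:

```python
import json

# Illustrative sketch of a structured prompt sent alongside the photo.
# Field names and schema are assumptions, not myhairline.ai's real prompt.
ANALYSIS_PROMPT = {
    "task": "hair_loss_classification",
    "instructions": [
        "Classify the pattern on the Norwood scale (stages 1-7, including 3V)",
        "Assess the density gradient from hairline to vertex",
        "Identify miniaturization zones (thinning but not yet bald)",
        "Flag asymmetric loss that may indicate non-androgenetic causes",
        "Estimate donor area density from the visible angles",
    ],
    "response_schema": {
        "norwood_stage": "string",
        "density_gradient": "string",
        "miniaturization_zones": "list[string]",
        "asymmetry_detected": "boolean",
        "donor_density": "string",
    },
}

# Serialized and sent to the vision model together with the image bytes
serialized = json.dumps(ANALYSIS_PROMPT)
```

Constraining the model with an explicit response schema is what makes the downstream report generation deterministic rather than free-form.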
How the Analysis Pipeline Works
The full analysis runs in three phases:
Phase 1: Landmark Detection (Client-Side)
When you upload or capture a photo, MediaPipe runs in your browser and extracts the 468 facial landmarks. The system calculates:
| Measurement | What It Tells Us |
|---|---|
| Hairline-to-brow distance | Whether the forehead is enlarged (suggesting recession) |
| Temple recession depth | Bilateral measurement of how far the hairline has receded at the temples |
| Forehead height vs golden ratio | Comparison against commonly cited ideal forehead heights (6.5 cm male / 5.5 cm female) |
| Left-right symmetry | Whether hair loss is symmetric (typical of androgenetic alopecia) or asymmetric (may suggest other causes) |
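The left-right symmetry check in the table above can be sketched as a simple relative-difference metric. The function name, threshold interpretation, and example depths are hypothetical:

```python
def temple_recession_asymmetry(left_depth_cm: float, right_depth_cm: float) -> float:
    """Relative difference between left and right temple recession.

    Androgenetic alopecia is usually symmetric, so a large value may
    point toward a non-androgenetic cause worth clinical follow-up.
    """
    deeper = max(left_depth_cm, right_depth_cm)
    if deeper == 0:
        return 0.0  # no recession on either side
    return abs(left_depth_cm - right_depth_cm) / deeper


# Hypothetical bilateral measurements derived from the landmarks
score = temple_recession_asymmetry(1.8, 1.6)
print(round(score, 3))  # ~0.111 -> fairly symmetric
```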
Phase 2: Visual Classification (Server-Side)
The photo and extracted landmarks are sent to Gemini Vision for Norwood classification. The model evaluates:
- Overall hair loss pattern against the seven Norwood stages
- Hair density visible in the photo (areas where scalp shows through)
- The transition zone between fully-haired and thinning regions
- Vertex (crown) status from top-down or angled photos
Phase 3: Report Generation
The landmark measurements and Gemini classification are combined to produce a report containing:
- Norwood stage (1-7, including 3V)
- Graft count estimate matched to the stage (e.g., NW3 = 1,500-2,200 grafts)
- Cost projections by region (Turkey $1-2/graft, USA $4-6/graft, UK $3-5/graft)
- Treatment recommendations based on clinical guidelines for the identified stage
- Facial landmark visualization showing the analyzed points
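The graft and cost figures in the report can be combined into a simple lookup. The sketch below uses only the ranges quoted in this article (NW3 grafts and per-region pricing); the function itself is an illustration, not the site's actual pricing logic:

```python
# Figures below are the ranges quoted in this article; real clinic
# pricing varies widely and should be confirmed directly.
NW3_GRAFTS = (1500, 2200)       # graft estimate for Norwood stage 3

COST_PER_GRAFT_USD = {          # (low, high) USD per graft by region
    "Turkey": (1, 2),
    "USA": (4, 6),
    "UK": (3, 5),
}


def cost_projection(graft_range: tuple[int, int], region: str) -> tuple[int, int]:
    """Best-case and worst-case total cost for a graft range in a region."""
    lo_cost, hi_cost = COST_PER_GRAFT_USD[region]
    lo_grafts, hi_grafts = graft_range
    return lo_grafts * lo_cost, hi_grafts * hi_cost


print(cost_projection(NW3_GRAFTS, "Turkey"))  # (1500, 4400)
```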
Why Gemini Vision Over Other Models
Gemini Vision was selected for myhairline.ai for several reasons:
Multimodal native design: Gemini Vision was built from the ground up to process images and text together, unlike models that bolt image understanding onto a text-only base. As a result, it interprets visual hair loss patterns with the same fidelity it brings to the accompanying medical context.
High-resolution image processing: Hair loss analysis requires fine detail. Individual hair shafts, miniaturization zones, and scalp visibility all matter. Gemini Vision handles high-resolution inputs without losing important details during compression.
Structured output capability: The model generates structured JSON responses that map directly to the Norwood scale, graft tables, and treatment databases. This produces consistent, reproducible results across thousands of analyses.
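A defensive parse of such a structured response might look like the sketch below. The field names are assumptions, since the real schema is not published:

```python
import json

# Hypothetical raw model output; the real schema is not public.
RAW_RESPONSE = '{"norwood_stage": "NW3", "graft_estimate": [1500, 2200], "vertex_thinning": false}'


def parse_report(raw: str) -> dict:
    """Validate a structured model response before building the report."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"error": "model returned non-JSON output"}
    required = {"norwood_stage", "graft_estimate"}
    missing = required - data.keys()
    if missing:
        return {"error": f"missing fields: {sorted(missing)}"}
    return data


report = parse_report(RAW_RESPONSE)
print(report["norwood_stage"])  # NW3
```

Validating the response before use is what keeps results consistent across thousands of analyses: a malformed model reply degrades into an explicit error instead of a wrong report.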
Speed: Analysis completes in under 60 seconds, including upload, processing, and report generation, making it practical as an on-demand screening tool with no scheduled appointment required.
Accuracy and Limitations
The system performs well for clear-cut Norwood stages (2, 3, 5, 6, 7) where the visual pattern is distinct. Borderline cases (the gap between stage 3 and 3V, or between 4 and 5) are more challenging because these stages overlap visually.
Factors that improve accuracy:
- Well-lit, high-resolution photos from the phone's main camera
- Front-facing angle with forehead fully visible
- Dry, unstyled hair without products that obscure the scalp
- Additional top-down photo for vertex assessment
Factors that reduce accuracy:
- Low light or harsh shadows across the scalp
- Wet or styled hair that masks thinning
- Hats, headbands, or hair accessories covering the hairline
- Selfie camera (lower resolution, wider angle distortion)
The AI analysis is a screening tool. It does not replace clinical examination with a densitometer, scalp laxity test, or in-person surgeon evaluation. Use it as an informed starting point. See the Norwood scale guide to understand what your stage means for treatment planning.
Privacy and Data Handling
Photos uploaded to myhairline.ai are processed for analysis and not stored permanently. MediaPipe landmark detection runs entirely in your browser. The Gemini Vision API call processes the image for classification and does not retain it. No account is required, no email is collected, and no personal data is stored.
Try the AI hair loss analysis tool now to see how Gemini Vision evaluates your hair loss pattern.
FAQ
Is AI-based Norwood staging accurate?
myhairline.ai uses 468 MediaPipe facial landmarks combined with Gemini Vision multimodal analysis to classify Norwood stages. Accuracy depends on photo quality, lighting, and angle. The system performs best with well-lit, front-facing photos using the phone's main camera. It serves as a reliable screening tool, though in-person clinical assessment remains the gold standard.
Do I need an account to use the tools?
No. myhairline.ai runs entirely in your browser. You take or upload a photo, the analysis happens, and you receive your report. There is no account creation, no email collection, and no cost. The tool works on any device with a camera and an internet connection.
How does myhairline.ai compare to a clinical assessment?
myhairline.ai provides an AI-generated Norwood stage, graft estimate, and treatment suggestions within 60 seconds. A clinical assessment adds hands-on scalp evaluation, donor density measurement with specialized instruments, and a personalized surgical plan. The AI report is a strong pre-consultation screening tool that helps you arrive at a clinic informed and prepared.
Get your free AI analysis now at myhairline.ai/analyze. See exactly how Gemini Vision evaluates your hair loss in under 60 seconds.