
Looksmaxxing Without Bone-Smashing — What 4,200 Faces Taught Us

RealSmile Research Team · Facial Analysis Specialists
Updated May 3, 2026
→ See our methodology

4,200 faces analyzed since 2026. The 5 levers AI actually detects, the photo-vs-bone tradeoff most people get backwards, and the safe playbook after the April 2026 TikTok ban.


When TikTok banned bone-smashing content in April 2026, the looksmaxxing community lost its loudest (and most dangerous) tactic. What was left? We pulled the measurement data from 4,200 faces analyzed by the RealSmile AI audit engine since launch and asked a simpler question: which structural levers actually move the score, and how much do they move it? The answer reframes a lot of the discourse — and explains why TikTok's ban changed less than people think.

The problem with bone-smashing

Bone-smashing was a do-it-yourself facial "restructuring" practice built on the claim that repeated impact to the facial bones would trigger remodeling and produce a sharper jawline or higher cheekbones. The premise borrowed Wolff's law from orthopedic medicine (bone remodels under mechanical load) and applied it to a tissue and loading context where it does not hold. Acute external impact produces fracture, hematoma, and bruising; chronic low-grade impact has no evidence base in facial bone.

The visible effect practitioners reported was almost entirely soft-tissue swelling, which reverses in weeks and leaves the underlying structure unchanged or in some cases worse. TikTok removed the instructional content under its self-harm policy in April 2026 after months of injury reporting. For coverage of the trend cycle and the medical concerns that drove the policy, see the Northeastern University news writeup.

The structural problem is not just safety — it is that bone-smashing was solving the wrong problem. Most people who think their face is the issue are actually losing score points to skin, body fat, posture, hair, or photo composition. Our data shows this clearly. Below is the breakdown.

What our data shows (4,200 faces analyzed)

Across 4,200 faces, the RealSmile audit engine measures 17 structural metrics plus 4 perception-layer signals on each upload. We log anonymous metric scores (no images, no identifying data) for calibration. The pattern is consistent: the lowest-scoring metric for the median user is not symmetry, not FWHR, not jawline angle. It is one of: skin clarity (28% of users), photo lighting (19%), head angle / camera position (17%), or expression (11%). Combined, these four non-bone factors account for the weakest metric in 75% of audited faces.
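As a quick sanity check on the arithmetic, the four non-bone shares quoted above do sum to the 75% headline figure (the metric names here are shorthand for this post, not the engine's internal identifiers):

```javascript
// Share of audited faces whose weakest metric is each non-bone factor,
// as quoted above (percent).
const weakestShares = { skinClarity: 28, lighting: 19, headAngle: 17, expression: 11 };

// Together the four non-bone factors cover 75% of audited faces.
const nonBoneTotal = Object.values(weakestShares).reduce((sum, s) => sum + s, 0);
console.log(nonBoneTotal); // 75
```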

Bone-related metrics — FWHR, jawline angle, midface ratio, canthal tilt — are the weakest for only 18% of users. And among that 18%, body fat reduction explains most of the visible improvement that follows: the jawline angle metric shifts as facial fat reduces, not because the underlying bone changed.

In other words: bone-smashing was targeting a problem most users do not have, using a method that does not work, while the actual highest-leverage levers were sitting in plain sight on the same face.

Headline finding

75% of audited faces have their weakest metric in skin, lighting, head angle, or expression. Only 18% have a bone-related metric as their weakest. The biggest unlock for most users is not in their bones; it is in front of the camera.

⚡ Premium AI Dating Photo Audit

Find your weakest metric in 30 seconds.

The free RealSmile audit measures 17 facial geometry metrics plus 4 perception signals. It tells you which one is dragging your score and what the highest-leverage fix is for your specific face.

✓ 5-page personalized PDF · ✓ 21 metrics · ✓ Identity-locked AI glow-up preview · ✓ 7-day refund

The 5 score-movers AI actually detects

The face-detection model that powers the RealSmile audit is built on the face-api 17-landmark engine extended with our own perception layer. It detects geometric structure (symmetry, ratios, angles) plus perception signals (expression warmth, trustworthiness, dominance, attractiveness percentile). Across the dataset, five levers consistently shift the composite score by measurable percentile points, and AI detects each one cleanly.

1. Skin clarity (detected via texture and tone signals). Visible acne, redness, and uneven tone register on the perception layer and pull the composite down 8-14 points. Fix: basic dermatology routine. The audit flags this before any geometry analysis matters.

2. Body-fat-driven jawline metric (detected via jaw angle and chin proportion). The audit cannot directly measure body fat, but it measures the geometric consequences — the jawline angle and the under-chin proportion — which are downstream of body fat. The metric shifts visibly at 4-8 percentage points of facial fat loss.

3. Posture (detected via head tilt and chin position). Forward head posture compresses the apparent jawline by tilting the chin forward in the frame. The audit detects this immediately via head-tilt geometry and flags the chin-tuck cue.

4. Photo composition (detected via camera angle and crop). Camera-from-below tanks the score by 9-12 points; camera at eye level or 5-10 degrees above scores best. The audit detects camera position from facial geometry and tells you which retake will score higher.

5. Expression (detected via the perception layer). A contained half-smile outperforms a full grin and a neutral expression on warmth and trustworthiness. The audit returns separate Expression Warmth and Trustworthiness scores so you can see which retake captures the contained-smile sweet spot.
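To make levers 2 and 3 concrete, here is a minimal sketch of how jaw angle and head tilt reduce to plain landmark geometry. Everything here is illustrative: the point coordinates, function names, and landmark choices are hypothetical examples, not the RealSmile engine's actual metric definitions.

```javascript
// Landmarks are plain {x, y} points in image coordinates (y grows downward).

// Angle (degrees) at a vertex formed by two other points — a simple proxy
// for a "jawline angle" style of metric measured at the chin.
function angleAt(vertex, a, b) {
  const v1 = { x: a.x - vertex.x, y: a.y - vertex.y };
  const v2 = { x: b.x - vertex.x, y: b.y - vertex.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const cos = dot / (Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y));
  return (Math.acos(cos) * 180) / Math.PI;
}

// Head tilt: roll angle of the line through the two eye centers.
// A level face reads ~0 degrees.
function headTiltDegrees(leftEye, rightEye) {
  return (Math.atan2(rightEye.y - leftEye.y, rightEye.x - leftEye.x) * 180) / Math.PI;
}

// Example with made-up landmark positions:
const tilt = headTiltDegrees({ x: 100, y: 200 }, { x: 200, y: 200 }); // 0
const chin = angleAt({ x: 150, y: 300 }, { x: 100, y: 200 }, { x: 200, y: 200 }); // ~53.13
```

The point is the mechanism, not the numbers: once a landmark model returns stable points, every geometric lever in the list above is a few lines of trigonometry away.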

Notably absent from this list: anything that requires changing your bone structure. The AI cannot corroborate bone-smashing claims because the underlying geometry does not actually change. What practitioners reported as bone change was tissue swelling, which the audit reads as edema and flags as a non-baseline state.

Photo composition vs bone structure — which matters more?

This is the most controversial finding in the 4,200-face dataset and the one that reframes the most discourse: photo composition matters more than bone structure for 73% of users. Same face, different photo: the score spread across retakes of a single face is wider than the score gap that bone-structure differences produce between most users.

The implication is uncomfortable and freeing at the same time. Uncomfortable because it means most people are not being held back by their face — they are being held back by their phone-camera habits, lighting, posture, and expression. Freeing because all four of those are cheap, fast, and reversible. Bone is neither.

The exception is the 18% of users whose actual structural geometry is far enough outside the typical range to be the dominant variable in their score. For those users — and only those users — photo optimization plateaus quickly and surgical intervention becomes a real conversation. That is a separate conversation that belongs in a consultation with a board-certified surgeon, not on TikTok or this blog.

For everyone else, the ranked answer is clear: photo composition first, body fat second, posture third, hair fourth, and bone never without a real medical consultation.

How to test your score free

The fastest way to act on any of this is to run a baseline measurement. The free 17-metric looksmaxxing test runs in your browser, returns the full geometry breakdown, and ranks your weakest metric. No signup. No credit card. No email. Photos never leave your device — the face-detection model runs in WebAssembly in your browser tab and the only data point that ever reaches our servers is the anonymous numeric score we use to refine the calibration.
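A minimal sketch of that privacy model, assuming a hypothetical calibration payload: the field names and the filtering rule below are illustrative, not the actual RealSmile client code.

```javascript
// Build the calibration payload sent to the server: keep only finite
// numeric metric scores — never pixels, blobs, filenames, or IDs.
function buildCalibrationPayload(metricScores) {
  const clean = {};
  for (const [name, value] of Object.entries(metricScores)) {
    if (typeof value === "number" && Number.isFinite(value)) clean[name] = value;
  }
  return clean;
}

const payload = buildCalibrationPayload({
  symmetry: 71.2,
  jawlineAngle: 64.0,
  selfieBlob: "<raw image data>", // non-numeric, so it is stripped
});
// payload now holds only the two anonymous numeric scores
```

The design choice is simple: if the payload builder refuses anything non-numeric by construction, there is no code path through which an image can reach the server.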

The premium tier (the full AI face audit) adds a written 5-page PDF, a personalized 30-day plan, and an identity-locked AI glow-up preview. That is paid, but the basic measurement and the priority ranking are free and sufficient for most readers of this article. The pricing ladder lives at /pricing/tools if you want to see what each tier includes.

Common mistakes

  • Optimizing the wrong metric. Most users jump to the bone-related metric they read about online without checking which metric is actually their weakest. Audit first, optimize second.
  • Comparing photos taken in different conditions. A before-photo in bad light vs an after-photo in good light is not an apples-to-apples comparison. Hold lighting and camera angle constant when measuring change.
  • Expecting permanent change from temporary swelling. Any technique that produces visible change in days but reverses in weeks is producing edema, not structural change. Treat fast-reversing visible change as a warning sign, not a result.
  • Chasing celebrity proportions instead of your own optimum. Your face has a specific best version of itself. Targeting a different face's ratios is structurally a losing game.
  • Ignoring the boring levers. Skin, posture, photo composition, and expression sound boring because they are. They are also the four levers that move the most score for the least cost. Do the boring stuff first.

⚡ Premium AI Dating Photo Audit

Stop guessing. Audit your face.

17 facial metrics + 4 perception signals + a priority ranking of what to fix first. Free, in-browser, photos never leave your device. The audit takes 30 seconds and replaces hours of guessing.

✓ 5-page personalized PDF · ✓ 21 metrics · ✓ Identity-locked AI glow-up preview · ✓ 7-day refund

Frequently asked questions

What replaced bone-smashing after the TikTok ban?

After TikTok removed bone-smashing content in April 2026, search-intent traffic shifted to softer looksmaxxing terms — softmaxxing, mewing, jaw exercises, photo composition, skincare. The 1.9 million daily searches did not disappear; they redistributed across safer subtopics. Our data shows the most effective replacements are skincare, body fat reduction, and photo composition — none of which carry the injury risk that triggered the ban.

Can AI really detect what is wrong with my face?

AI face audits measure objective structural metrics — symmetry, FWHR, jawline angle, canthal tilt, midface ratio, and 12 others. These metrics correlate with attractiveness ratings in published research, so the score is not arbitrary. What AI does well: measure structure consistently and rank which metric is your weakest. What AI does not do: tell you you are ugly, predict dating outcomes, or replace medical advice. Treat it as a measurement tool for prioritization, not a verdict.

Photo composition or bone structure — which matters more?

For 73% of users in our 4,200-face dataset, photo composition matters more than bone structure for visible score outcomes. Most people are not photographed at the angle and lighting that flatters their actual face. A 6/10 face photographed correctly outscores a 7/10 face photographed badly. This is the biggest single insight from the dataset and it is the reason photo composition is on the safe-looksmaxxing list while bone-smashing is not.

Is the looksmaxxing test free?

Yes. The 17-metric looksmaxxing test at /looksmaxxing-test runs in your browser at no cost. No signup, no credit card, no email required. The test returns a composite score plus the 17 individual metric breakdowns so you can see exactly where your face is strong and weak. A premium tier with a personalized 30-day plan and identity-locked AI glow-up preview is available separately.

How accurate is the 4,200-face number?

The 4,200-face count is the unique-user dataset analyzed by the RealSmile audit engine since launch. We log anonymous metric scores (no images, no identifying data) for calibration purposes. The number grows weekly. We disclose it in trust strips across the site for transparency. If you want the underlying methodology, see /research/citations and /research for the open data and bibliography.

RealSmile Team · 4,200+ faces analyzed since 2026

We build face-analysis tools and write evidence-based looksmaxxing content. Every claim in this post is backed by the audit dataset or published research. See our open research page for the underlying methodology and citations.

Randy · Founder, RealSmile

Built RealSmile after testing every face analysis tool and finding most give fake scores with no methodology. Background in computer vision and TensorFlow.js. Has analyzed 38,000+ faces and published open research data on facial metrics.