The Evidence Problem in Aesthetic Medicine — Why We Should Read the Small Print
"80% of women saw an improvement in fine lines in two weeks."
It is the kind of statistic that appears constantly in aesthetic medicine marketing and the kind that deserves considerably more scrutiny than it typically receives.
This post takes an in-depth look at the issues raised in our general blog.
A number that tells you almost nothing
Let us begin with that statistic, because it illustrates the problem with unusual precision. Eighty percent of women saw an improvement in fine lines in two weeks.
Consider what this does not tell you.
It does not tell you how many women were in the study.
It does not tell you whether there was a control group.
It does not tell you who assessed the improvement, or whether that assessment was objective or self-reported.
It does not tell you what "improvement" means, or how it was defined, or whether the threshold for recording it was one fine line looking slightly less visible in certain lighting conditions.
And it does not tell you, as your biology should immediately prompt you to ask, how it is possible to demonstrate meaningful collagen improvement in fourteen days, when we know from established science that true new collagen growth unfolds over months and cannot be meaningfully accelerated.
The statistic tells you, with considerable confidence, that the company selling the product wanted you to feel impressed. Beyond that, it tells you very little.
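To make the sample-size point concrete, here is a short illustrative sketch (the study sizes are hypothetical) of how wide the 95% confidence interval around an observed "80%" really is, using the standard Wilson score interval for a binomial proportion:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# "80% of women saw an improvement" -- but from how many women?
# With n=10 the interval spans roughly 49% to 94%; n=500 narrows it to ~76%-83%.
for n in (10, 25, 500):
    lo, hi = wilson_interval(round(0.8 * n), n)
    print(f"n={n:>3}: 80% observed, 95% CI {lo:.0%} to {hi:.0%}")
```

The same headline figure, in other words, is compatible with anything from a coin-flip to a genuinely strong effect until you know how many participants stood behind it.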
The funding problem
The most significant structural issue in the aesthetic medicine evidence base is not dishonesty. It is a conflict of interest so embedded in how the field operates that it has become largely invisible. The majority of clinical research on aesthetic treatments is funded, directly or indirectly, by the companies that manufacture the products being studied. Industry-sponsored dermatological research is particularly vulnerable to bias, especially in trials assessing subjective outcomes such as tolerability and quality of life.
While sponsorship does not inherently invalidate findings, it necessitates rigorous safeguards such as third-party data analysis and independent replication.
The problem is not simply that manufacturers fund research into their own products, though that alone is worth noting. It is that they also design the studies, select the endpoints, choose the comparators, and decide which results are submitted for publication.
A study that produces unflattering results is under no obligation to appear in the literature. There is strong evidence that studies reporting positive or significant results are more likely to be published, and that 40 to 62% of studies have at least one primary outcome that was changed, introduced, or omitted between protocol and publication.
This is outcome reporting bias, and it is a well-documented phenomenon that applies with particular force to a field where the regulatory requirements for pre-market evidence are considerably less demanding than in pharmaceutical medicine.
The study design problem
Even setting publication bias aside, the quality of study design in aesthetic medicine is frequently insufficient to support the conclusions drawn from it. Small sample sizes are endemic.
Studies of twenty or thirty patients, run for twelve weeks, with subjective self-assessment as the primary outcome measure, are presented as clinical evidence of efficacy without adequate acknowledgement of their limitations.
There are no placebos in injectable treatments (or rather, a convincing placebo is extremely difficult to construct), which makes blinding genuinely challenging and the risk of expectation bias genuinely significant. Patients who have paid for a treatment, and who believe in it, tend to report improvement. That is not a finding. It is a phenomenon.
Follow-up periods are routinely too short. A treatment claiming to stimulate collagen production over six to twelve months is frequently evaluated at eight or twelve weeks, at the point when the initial inflammatory response or hydration effect is most visible and the question of whether durable structural change has occurred is still entirely open. The timeline of the study, in other words, is often chosen to capture the most favourable moment rather than the most clinically meaningful one.
The language problem
The language in which aesthetic medicine research is reported compounds the design problems considerably. "Significant improvement" in a clinical paper means something very specific: a statistically significant difference between two groups at a defined threshold of probability.
In the hands of a marketing department, it becomes simply "significant improvement," stripped of its statistical context and presented as though it were an unqualified clinical judgement. "Patients reported improvement" conflates the opinion of a treated patient with objective clinical measurement. "Clinically proven" can mean almost anything, since the bar for what constitutes proof in this context is rarely defined and seldom independently verified.
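The gap between statistical and clinical significance is easy to demonstrate. In this illustrative sketch (the response rates and trial sizes are hypothetical), a three-point difference that no patient would ever notice becomes "statistically significant" once the trial is simply made large enough, using a standard two-proportion z-test:

```python
import math

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# A 52% vs 49% response rate: clinically trivial, but it crosses the
# conventional p < 0.05 threshold (|z| > 1.96) once n is large enough.
for n in (100, 5000):
    z = two_proportion_z(0.52, 0.49, n, n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n={n} per arm: z = {z:.2f} ({verdict} at p < 0.05)")
```

Statistical significance, in short, is a statement about sampling noise, not about whether the effect is large enough to matter to anyone.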
The regulatory framework that governs advertising claims in aesthetic medicine is not without teeth — the Advertising Standards Authority has challenged and upheld complaints against misleading aesthetic marketing — but the volume of content, particularly on social media and in clinic marketing materials, makes meaningful enforcement close to impossible. The claims keep appearing. Most patients, and some practitioners, do not have the framework to challenge them.
What good evidence actually looks like
It is important to be clear that the problem is not the absence of good evidence in aesthetic medicine. There are well-designed, independently conducted, randomised controlled trials in this field, and the evidence base for established treatments (botulinum toxin, hyaluronic acid fillers, certain biostimulators) is considerably stronger than the evidence base for many newer arrivals. The problem is distinguishing credible evidence from promotional material dressed in scientific language, and doing so consistently and without specialist training in clinical trial methodology.
A few markers of credibility are worth knowing.
Who funded the study and is that disclosed clearly?
Was there a control group, and if so what was it?
How was the primary outcome defined and measured, and by whom?
What was the sample size, and was it calculated in advance to have adequate statistical power?
How long was the follow-up period, and does it correspond to the biological timeline of the effect being claimed?
Were the results independently replicated, or does the entire evidence base for this treatment consist of studies conducted by its manufacturer?
These are not arcane questions. They are the basic infrastructure of clinical judgement, and they apply to aesthetic medicine as surely as they apply to any other branch of medicine that has had the good fortune to be more rigorously regulated.
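The power question on that checklist can be made concrete with a short sketch (the response rates are hypothetical), using the standard normal-approximation formula for the sample size needed per arm to detect a difference between two proportions:

```python
import math

def n_per_arm(p1: float, p2: float, alpha_z: float = 1.96, power_z: float = 0.84) -> int:
    """Approximate patients per arm to detect p1 vs p2 with ~80% power
    at a two-sided 5% significance level (normal approximation)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (alpha_z + power_z) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a large effect (60% vs 40% response) needs roughly 95 per arm;
# a modest one (55% vs 50%) needs well over a thousand.
print(n_per_arm(0.60, 0.40))
print(n_per_arm(0.55, 0.50))
```

A twenty- or thirty-patient study, of the kind described earlier, is therefore underpowered even for very large effects, which is precisely why a pre-specified power calculation belongs on the list.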
The practitioner's responsibility
Investment decisions should be guided not just by manufacturer claims but by peer-reviewed evidence, patient demographics, and strategic practice positioning. That is true for clinics making decisions about which treatments to offer. It is equally true for practitioners advising individual patients on which treatments to choose.
A practitioner who accepts the manufacturer's evidence summary without engaging critically with its methodology is not fully discharging their clinical responsibility. Neither is one who presents that evidence to patients in terms that imply a certainty the underlying data does not support.
The patients who sit in our consulting rooms deserve honest information about what the evidence for a treatment actually shows — including where it is strong, where it is limited, and where the jury is still very much out. That kind of transparency is not a commercial risk. In our experience, it is the most effective form of trust-building available.
A note of proportion
None of this is an argument for nihilism about the evidence base in aesthetic medicine, or for refusing to use treatments whose evidence does not meet the standard of a pharmaceutical randomised controlled trial.
Many treatments with a modest but genuine evidence base produce real and meaningful clinical benefits for real patients, and the perfect should not be the enemy of the good. What it is an argument for is a consistent and unsentimental willingness to read the small print — to ask who funded the study, what the methodology actually was, and whether the claim being made is supported by the evidence being cited.
That habit of mind is what separates clinical judgement from the acceptance of marketing. It is, in the end, what being a doctor means.
The views expressed in Clinical Perspectives are the author's own and reflect their personal and professional experience in aesthetic medicine.
References
Kang BY et al. Barriers to clinical cosmetic and laser dermatology research in the academic setting by source of funding: a systematic review. Archives of Dermatological Research. 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12126328/
Dwan K et al. Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review. PLOS ONE. 2013. https://pmc.ncbi.nlm.nih.gov/articles/PMC3702538/
Enhanced Tolerability and Improved Outcomes in Acne Management: A Real-World Study of Dermocosmetic Adjunctive Therapy. PMC. 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12309146/
IAPAM. Top Aesthetic Medicine Trends to Watch in 2026. https://iapam.com/2026-aesthetic-medicine-trends