MEDVi: $400M in Revenue, Two Employees, and the Healthcare Governance Vacuum

Thiago Victorino

A company founded in September 2024 with roughly $20,000 in capital generated $400 million in revenue within its first year. It projects $1.8 billion for year two. It has two employees: the founder and his brother. It operates in healthcare.

Those numbers should provoke more alarm than admiration.

MEDVi sells GLP-1 weight loss prescriptions through a telehealth storefront. Kyle Harrison, a general partner at Contrary and former investor at Index Ventures, Coatue, and TCV, published a detailed examination of the company in April 2026. His conclusion is blunt: “MEDVi is not, in any meaningful sense, a healthcare company.”

The AI is not the business. The AI is, in Harrison’s phrase, “the wrapping paper, not the gift.” The actual business is regulatory arbitrage: exploiting COVID-era telehealth deregulation, renting clinical infrastructure from third parties, and riding GLP-1 demand to extraordinary revenue on a near-zero cost base.

The Infrastructure Behind the Storefront

Harrison’s most useful contribution is pulling apart MEDVi’s supply chain. The company does not employ physicians. It does not operate clinics. It contracts with CareValidate for provider credentialing and OpenLoop Health for the telehealth network. These two companies, with a combined workforce of approximately 700 employees, provide the clinical infrastructure that MEDVi’s two-person operation could never build.

“The storefront isn’t the business,” Harrison writes. “The supply chain is the business.”

This is the structure that produced $400 million in revenue at a 16.2% net margin. For comparison, Hims & Hers, a company with 2,400 employees and actual clinical operations, generates $2.4 billion in GLP-1 revenue at a 5.5% margin. MEDVi’s margin advantage comes from the absence of the infrastructure that Hims & Hers maintains. The question is what that absent infrastructure was supposed to do.

What the Absent Infrastructure Was Supposed to Prevent

Harrison’s investigation documents a series of failures that answer that question precisely.

Fabricated physician profiles. MEDVi’s website featured provider profiles with names like “Dr. Tuckr Carlzyn” accompanied by stock photos. These are not real physicians. They are fictional identities presented to patients as their healthcare providers. In any traditional healthcare setting, fabricating provider credentials would trigger immediate regulatory action. MEDVi operated this way at scale.

Non-functional clinical intake. Harrison tested MEDVi’s intake process and found it accepted a target weight of 60 pounds and a birthday of February 31. Neither is medically or calendrically possible. The intake form exists to simulate clinical diligence. It does not perform it.
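To make that failure concrete, here is a minimal sketch, in Python, of the kind of check the intake evidently never ran. The function name, field names, and thresholds are hypothetical, not MEDVi's actual form logic; the point is only that rejecting February 31 and flagging a 60-pound adult target weight takes a few lines, not clinical infrastructure.

```python
from datetime import date

def validate_intake(dob_year: int, dob_month: int, dob_day: int,
                    target_weight_lbs: float) -> list[str]:
    """Return the reasons an intake submission should be rejected (empty list = pass)."""
    problems = []

    # A real calendar check: February 31 raises ValueError here.
    try:
        dob = date(dob_year, dob_month, dob_day)
        if dob >= date.today():
            problems.append("date of birth must be in the past")
    except ValueError:
        problems.append("date of birth is not a valid calendar date")

    # A crude plausibility floor (threshold is illustrative): a 60 lb target
    # weight for an adult should be routed to a clinician, not silently accepted.
    if target_weight_lbs < 80:
        problems.append("target weight below plausible adult range; route to clinician")

    return problems

# The two inputs Harrison reports the form accepted:
print(validate_intake(1990, 2, 31, 160))  # invalid date -> rejected
print(validate_intake(1990, 6, 15, 60))   # implausible target weight -> flagged
```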

AI-generated testimonials. The company used tools like Midjourney and Runway to create fake before-and-after photos for marketing. Patient testimonials with fabricated imagery are not marketing shortcuts. In healthcare, they are fraud.

Data breach at the infrastructure layer. In January 2026, OpenLoop Health suffered a breach exposing 1.6 million patient records. MEDVi’s patients were among them. When your clinical infrastructure is rented, your patients’ data security is rented too.

The FDA has issued a warning letter. A class action lawsuit has been filed in Delaware. The regulatory and legal systems are responding. But they are responding after $400 million in revenue, after 250,000 customers, after 1.6 million records were exposed.

The AI Made This Possible (But Not in the Way You Think)

The technology press covered MEDVi as an AI success story. Harrison’s analysis reveals something more instructive. AI did not perform the clinical work. AI performed the scaling work. It generated marketing content, created fake testimonials, built a consumer-facing storefront that looked legitimate, and automated an intake process that functioned as theater rather than triage.

This is the pattern we identified in Vertical AI and the Governance Gap in Professional Services: when AI performs professional work, professional standards must apply to the machines doing that work. MEDVi extends the pattern. Even when AI merely wraps professional services in a consumer interface, it can strip away the governance layer that professional infrastructure was built to provide.

The parallel to legal AI is direct. As we examined in Harvey: 25,000 Agents, 100,000 Lawyers, Zero Governance Standards, Harvey deploys AI agents across 1,300 legal organizations with no industry-wide standard for verification, liability, or professional oversight. MEDVi did the same in healthcare, except with fabricated providers instead of real ones. Harvey at least has 100,000 actual lawyers on the platform. MEDVi had stock photos.

The severity differs. The structural problem is identical. Professional services require professional governance. AI makes it possible to deliver the appearance of professional services without the substance. That possibility is the governance vacuum.

Telehealth Deregulation Created the Opening

MEDVi did not emerge from nothing. It emerged from a specific regulatory environment.

During COVID, federal and state regulators loosened telehealth restrictions to maintain healthcare access. Prescribing controlled substances via video. Interstate practice without state-by-state licensure. Reduced in-person requirements. These changes served a genuine public health need during the pandemic. They also created the regulatory opening that MEDVi exploited.

Harrison frames this as the collision of three forces: COVID-era telehealth deregulation, the availability of rented clinical infrastructure through companies like OpenLoop Health, and explosive consumer demand for GLP-1 medications. Each force alone is manageable. Combined, they created conditions where a two-person company could practice what amounts to healthcare without any of the infrastructure, oversight, or accountability that healthcare requires.

The technology industry’s favorite mantra applies here with lethal precision. “‘Move fast and break things’ just became ‘move fast and break laws,’” Harrison writes.

What Healthcare Governance Must Address

The standard response to cases like MEDVi is to call for more regulation. That framing misses the point. The regulations exist. The FDA has authority. State medical boards have authority. HIPAA governs data protection. The failure is not an absence of rules. It is a failure of the rules to account for the speed and scale at which AI-wrapped services can reach patients.

As we explored in The Autopilot Reckoning, the transition from tools to autonomous services creates liability questions that existing frameworks cannot answer. MEDVi is healthcare’s version of that transition. The company did not build AI that practices medicine. It built AI that sells medicine, and outsourced the practicing to a rented infrastructure layer that buckled under the weight.

Three governance failures need addressing.

Infrastructure-layer accountability. When a healthcare company outsources its entire clinical operation to third parties, who governs the quality of care? MEDVi can claim it is a technology platform. OpenLoop Health can claim it provides the network but not the prescriptions. CareValidate can claim it credentials providers but does not supervise them. Each entity points to the others. The patient has no clear accountable party. Regulators must close this loop. If you collect revenue from patients for healthcare services, you bear governance obligations for those services, regardless of how many layers of infrastructure sit between you and the patient.

Intake verification standards for AI-mediated prescribing. A system that accepts February 31 as a birthday is not conducting clinical intake. It is conducting conversion optimization. When AI mediates the interaction between patient and prescriber, minimum standards must exist for what that intake actually verifies. Not just identity, but clinical appropriateness. Not just form completion, but medical judgment. A sketch of what such verification might look like appears after the third item below.

Speed-of-deployment oversight. Traditional healthcare governance assumes deployment timelines measured in months or years. Building a clinic takes time. Hiring physicians takes time. The accreditation process takes time. MEDVi reached 250,000 patients in under a year with two employees. The governance mechanisms that depend on friction (slow buildout, staffing requirements, physical inspections) are irrelevant when the deployment is digital, rented, and AI-accelerated. New mechanisms must account for the velocity.
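For the intake-verification point above, here is a minimal, hypothetical sketch of screening for clinical appropriateness rather than mere form completion. The BMI thresholds mirror commonly cited label criteria for weight-management GLP-1s (BMI of 30 or above, or 27 with a weight-related comorbidity), but the function names and routing messages are assumptions; a real system would treat this only as a routing step ahead of review by a licensed prescriber.

```python
def bmi(weight_lbs: float, height_in: float) -> float:
    """Body mass index from imperial units: 703 * lbs / in^2."""
    return 703.0 * weight_lbs / (height_in ** 2)

def glp1_screening(weight_lbs: float, height_in: float,
                   has_weight_related_comorbidity: bool) -> str:
    """Route an intake based on a simplified eligibility screen.

    Thresholds follow commonly cited label criteria for weight-management
    GLP-1s; the final call belongs to a licensed prescriber, not this code.
    """
    b = bmi(weight_lbs, height_in)
    if b >= 30 or (b >= 27 and has_weight_related_comorbidity):
        return f"BMI {b:.1f}: meets screening criteria -> forward to prescriber for review"
    return f"BMI {b:.1f}: does not meet screening criteria -> decline or escalate"

print(glp1_screening(210, 66, False))  # BMI ~33.9 -> forward for review
print(glp1_screening(150, 68, False))  # BMI ~22.8 -> decline or escalate
```

The point is not that a BMI check constitutes governance. It is that even this floor, a few lines of arithmetic, was absent from a system prescribing to 250,000 people.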

The Rorschach Test

Harrison titled his analysis “The $1B Rorschach Test” because how you see MEDVi reveals what you believe about technology, healthcare, and governance.

If you see a scrappy founder who exploited a market opportunity, you are looking at the revenue and ignoring the patients. If you see a healthcare company, you are looking at the storefront and ignoring the supply chain. If you see an AI company, you are looking at the wrapping paper and ignoring what is (or is not) inside the box.

What MEDVi actually reveals is simpler and more uncomfortable. Professional services governance assumes that the entities delivering professional work have professional infrastructure. AI dissolves that assumption. It makes it possible to build a $400 million healthcare business with the infrastructure of a lemonade stand.

The governance question is not whether MEDVi broke the rules. The FDA and the courts will determine that. The governance question is whether the rules are built to detect and prevent the next MEDVi before it reaches 250,000 patients. Right now, the honest answer is no.


This analysis synthesizes Kyle Harrison’s The $1B Rorschach Test (April 2026).

Victorino Group helps healthcare and regulated-industry organizations build governance infrastructure before the next incident forces the question. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
