
The Government Can’t Ensure Artificial Intelligence Is Safe. This Man Says He Can.



Artificial intelligence promises to revolutionize health care by predicting illnesses, speeding diagnoses, selecting effective treatments and lightening doctors’ administrative loads — but only if doctors trust that it won’t harm their patients.

The government is struggling with oversight of this rapidly evolving technology. But Dr. Brian Anderson, whose long hours as a family doctor for low-income immigrants in Massachusetts inspired him to work on technology that eases the burden of caring for patients, says he can fill the gap.

Anderson’s Coalition for Health AI, an alliance of tech giants and major hospital systems, plans to launch quality assurance labs in 2025 to vet AI tools, effectively entrusting the private sector with policing the technology in the absence of government action.

Biden administration officials have signaled support for the idea. The administration’s top health tech official, who previously served on CHAI’s board, endorsed the concept at a POLITICO event in September. Nearly three thousand industry partners have joined the effort, including the Mayo Clinic, Duke Health, Microsoft, Amazon and Google. Anderson, who went on to become a consultant to federal regulators on health tech after his time as a family doctor, is now trying to convince President-elect Donald Trump that the health AI industry should oversee itself.

Anderson calls the current regulatory gaps “a wonderful opportunity for industry to lead a bottoms-up effort.”

If Trump’s team endorses Anderson’s effort, it could effectively establish the trajectory of AI regulation in health care going forward, ceding the principal role to the private sector. Critics worry it could advantage major companies and health systems over startups. And some doctors and patient advocates worry Anderson’s labs won’t actually ensure safety.

The critics say the certification process Anderson envisions is unlikely to catch the ways AI can steer doctors wrong, even as it encourages hospitals and physicians to more rapidly adopt tools that put patients at risk. They’re concerned that CHAI will put industry interests ahead of patients. They would like to see the government do more.

But Dr. Robert Califf, who leads the government’s regulation of health software at the Food and Drug Administration, says his agency cannot monitor all advanced AI tools without a doubling of his staff, a hard sell in Congress. His agency leaves many new AI tools, such as chatbots, entirely unregulated.



Anderson is offering an answer and getting a warm reception. Within a year of launching CHAI, he had recruited hundreds of industry members and convinced Califf deputy Troy Tazbaz and President Joe Biden’s coordinator for health technology, Micky Tripathi, to serve as nonvoting members of CHAI’s board.

Anderson has designed CHAI’s initiatives to complement federal rules that Tripathi’s office at the Department of Health and Human Services finalized in 2023, which require AI developers to provide more information about how their tools work. CHAI offers “model cards,” a sort of nutrition label for algorithms, to help companies fulfill the requirement.

But convincing Trump is not a given, and without his buy-in the whole plan is under threat.

Trump has pledged to rescind Biden’s 2023 executive order on AI, which tasked agencies with developing safety measures. Trump’s campaign platform derided “Radical Leftwing ideas on the development of this technology” while promising “AI Development rooted in Free Speech and Human Flourishing.”

On the other hand, Trump takes advice from billionaire AI pioneer Elon Musk, who sees the technology’s potential for harm and backed a California bill to regulate it in 2024. If Musk has clout, Trump could prefer stricter government oversight.

Anderson touted his plan on Capitol Hill in November to a bipartisan crowd. Technologists in Trump’s orbit, including Kev Coleman of the Paragon Health Institute, a Washington think tank with deep ties to the president-elect, and Eric Hargan, who was deputy secretary at HHS during Trump’s first term, were there. So were key Democrats, including outgoing Senate Majority Leader Chuck Schumer, who convened a bipartisan AI task force in 2023, and Tripathi, who’s now HHS’ chief AI officer.

Coleman has issued two reports on AI seeking to influence Trump’s thinking. He says FDA should play the main role and that it should get the highly trained staff Califf says it needs. At the same time, Coleman warns against both ill-considered government regulation that could stymie the technology’s growth and outsourcing AI oversight to self-interested AI developers in the private sector.

Leaving oversight to bureaucrats easily swayed by “emotionally compelling and sensationalistic claims” about AI could squander the technology’s potential to reduce health care costs, Coleman wrote. At the same time, handing oversight to the private sector could create a conflict of interest, since “many institutions qualified to evaluate medical AI are themselves AI developers.”

Leading House Republicans share that worry.

Rep. Jay Obernolte (R-Calif.), chair of a bipartisan House task force considering Congress’ role in promoting and regulating AI, has written to top Biden officials three times warning that Anderson’s assurance labs could disadvantage startups.

Rep. Brett Guthrie (R-Ky.), who’s the incoming chair of the House panel in charge of health policy, co-signed the letters.

“Regulatory capture will only drive consolidation and lead to costlier health care for patients across the country,” Obernolte and Guthrie wrote.

Undeterred, Anderson met in November with Obernolte and other key Republicans, such as Sen. Mike Rounds (R-S.D.), who served on Schumer’s AI task force, and Sen. Mike Crapo (R-Idaho), who’s set to lead a Senate committee with broad powers over health care in 2025.

CHAI plans to finish certifying its first assurance labs early in 2025.

“AI is moving incredibly fast,” Anderson said. “We need to develop these frameworks at the pace of this kind of innovation.”

CHAI is born

Anderson has become a fixture in the halls of HHS through his work at MITRE, which runs federal research and development centers. For decades, the nonprofit, tucked away in the Washington suburbs, has advised government agencies.

“We are required by law to not have commercial entanglements,” Anderson said, referencing the group’s original charter as a Pentagon-backed think tank.

Because of its special status, MITRE employees are often in the room at the earliest stages of policy development and the nonprofit frequently partners with private-sector entities to help solve public problems. As the organization’s former chief digital health physician, Anderson organized private-sector collaborations during the pandemic.

In 2020, he helmed the Covid-19 Health Care Coalition, a group of tech and pharmaceutical companies and health systems that streamlined efforts to collect and distribute donated plasma to treat severely ill Covid-19 patients. He also built the Vaccine Credential Initiative, which developed a digital tool so people could prove they were vaccinated. Both groups included large companies like Microsoft and Amazon and major academic medical centers like Mayo Clinic and Boston Children’s Hospital.

In the pandemic’s waning days, as the processing power needed to fuel AI was growing exponentially, he began thinking about how the technology could transform disease surveillance and drug development.

“This could be used to save real lives,” he said. “That was the motivation.”

Anderson reached out to his mentor Dr. John Halamka, head of Mayo Clinic’s technology innovation center, to gather a team of academics and digital companies to establish AI guidelines in medicine. He also invited government officials, including HHS’ Tripathi, to join their meetings. The Coalition for Health AI was born.

When the White House released its Blueprint for an AI Bill of Rights in fall 2022, health systems and tech companies reached out to CHAI, Anderson said: “There were clear articulations about things like fairness, independent evaluations of models — those things were clearly called out.”

By the end of 2022, CHAI had published a blueprint for trustworthy AI, which included so-called model cards — essentially nutrition labels for AI products — and assurance labs to vet AI tools.

A year later, the coalition had 1,500 member organizations, which made it difficult for Anderson to manage while fulfilling his other duties at MITRE. Though he had hoped that MITRE could operate CHAI, the organization declined, citing potential conflicts with its mandate to represent the government’s best interests.

In March, Anderson left MITRE to become CEO of CHAI.

The nonprofit’s annual membership fees range from $5,000 for small companies to $250,000 for larger firms. CHAI doesn’t lobby or set standards, said Anderson, but policymakers have shown interest in its work.

“One of my fears, one of the fears, I think, of many folks within CHAI, is that the development, the balloting, the approval — the normative process of creating a standard can take a long time,” he said. “If we rely on that process alone to develop some of these guidelines or guardrails within the innovation community, they won’t be able to keep up with the pace of innovation and AI.”

In 2024, CHAI published its model card as a way for tech developers to inform clients of their algorithms’ strengths and weaknesses. The cards are meant to complement a new HHS transparency rule, slated to take effect at the end of 2024, that requires HHS-certified electronic health record companies to disclose 31 attributes of their decision-support tools to buyers.

Access to CHAI’s model card is free for new members and can be licensed by others. Anderson said CHAI is making available a free, open-source version on the tech developer platform GitHub.
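To make the nutrition-label analogy concrete, here is a minimal sketch, in Python, of the kind of structured disclosure a model card captures. The field names and values are illustrative assumptions, not CHAI’s published schema or any real product’s data.

```python
import json

# A hypothetical model card for an imaginary clinical AI tool.
# Field names are illustrative only; they loosely echo the kinds of
# attributes the HHS transparency rule asks developers to disclose.
model_card = {
    "model_name": "sepsis-risk-predictor",           # hypothetical tool
    "developer": "Example Health AI, Inc.",          # hypothetical vendor
    "intended_use": "Flag adult inpatients at elevated risk of sepsis",
    "out_of_scope_uses": ["pediatric patients", "outpatient triage"],
    "training_data": "De-identified EHR records, 2015-2022",
    "performance": {"auroc": 0.87, "sensitivity": 0.81},  # made-up numbers
    "known_limitations": [
        "Not validated on populations outside the training health system",
        "Accuracy may drift as clinical documentation practices change",
    ],
    "fairness_evaluation": "Subgroup performance reported by age, sex, race",
    "last_updated": "2024-06-01",
}

# Serialize the card so a hospital's procurement team can review it.
print(json.dumps(model_card, indent=2))
```

The point of the format is less the particular fields than the discipline: a buyer can compare tools attribute by attribute, the way a shopper compares labels.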

The organization is also certifying seven assurance labs that will evaluate algorithms. In June, Anderson said about 30 organizations had expressed interest in starting labs.

Anderson wants to establish a national registry where all model card information will live. He has sought government funding from the federal innovation center he once advised, the Advanced Research Projects Agency for Health, to develop AI evaluation tools.

Washington influence

CHAI’s growing influence has attracted members who value the organization’s perceived sway with regulators, something MITRE’s early involvement helped solidify.

Not only are federal regulators members of CHAI working groups, but officials like Tripathi and Melanie Fontes Rainer, head of the HHS Office for Civil Rights, regularly appear at CHAI events as featured guests. Regulators, including Califf and HHS Deputy Secretary Andrea Palm, have endorsed assurance labs as supplements to federal oversight.

In backing the concept of assurance labs at a POLITICO event in September, Tripathi stopped short of endorsing CHAI and said HHS is monitoring several AI initiatives.

Both Tripathi and Tazbaz resigned from their nonvoting federal liaison roles on CHAI’s board in 2024 to avoid potential “collisions” between members’ interests and HHS policymaking. Still, they both feature prominently in a photo on CHAI’s website titled “Our Purpose.”



And regulators continue to praise CHAI. In November, a Tripathi deputy, Jeff Smith, touted CHAI’s model card for compliance with the new transparency rule.

But the incoming Trump administration, with its new slate of agency officials, could pose a problem for Anderson.

Some Republican legislators are concerned about Biden administration regulators’ relationships with CHAI. In June, four House lawmakers, Obernolte, Guthrie, Mariannette Miller-Meeks (R-Iowa) and Dan Crenshaw (R-Texas), wrote to Jeff Shuren, then the head of the FDA’s Center for Devices and Radiological Health, to object to the agency’s relationship with the coalition.

“While we are ardent supporters of the use of third-party expertise for regulatory review, CHAI comprises legacy tech companies like Microsoft and Google in addition to large health care systems, which all have AI incubator businesses. Their inclusion presents a clear conflict of interest,” they wrote.

The group sent another letter to Tripathi in November, requesting clarity on how HHS would prevent assurance labs run by technology developers from controlling market access.

Anderson said he’s taking steps to assuage their concerns by setting operating rules for the labs.

“It might be that the final certification framework is not just disclosure [of a conflict of interest], but you cannot certify or you cannot evaluate a model with a CHAI report card where you have a commercial opportunity to benefit,” he said.

Assurance labs

CHAI’s broad membership masks an internal debate over how labs will function and whether they will provide the reassurance health systems need to adopt AI tools.

At an October CHAI meeting in Las Vegas, members questioned whether assurance lab certification would increase the cost of AI products and adversely affect hospital budgets.

Some questioned whether a one-time certification for assurance labs was the most effective way to assess the potential for patient harm, given health institutions’ differing technological setups and workflows and AI models’ tendency to degrade over time.

“Our view of AI assurance is it’s got to be context-driven, meaning: I’m assuring an AI for a particular purpose, used by a set of people,” said Doug Robbins, vice president of engineering and prototyping at MITRE Labs.

MITRE and UMass Chan Medical School are launching an assurance lab, separate from CHAI, that will focus on ensuring that AI can work in a specific hospital setting. Meanwhile, CHAI is certifying assurance labs for Mayo Clinic Platform, UMass Memorial Medical Center, and five startups. The startups will assess and verify the algorithms’ basic functions.

Nigam Shah, chief data scientist for Stanford Health Care and a CHAI board member who wrote an influential paper on assurance labs, believes the labs must evolve into tools that can be rapidly deployed to vet AI inside health systems.

“In the beginning, it’s a place,” he said. “After a while, the lab starts releasing parts of what they do as software.”
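Shah’s notion of a lab “releasing parts of what they do as software” might, in rough sketch form, look like an automated check a hospital runs against its own recent cases. The snippet below is a simplified illustration of that idea under assumed inputs, not any lab’s actual tooling: it recomputes a model’s accuracy on local data and flags when performance slips below the level reported at certification.

```python
# Illustrative sketch of "assurance as software": a hospital periodically
# re-scores a vendor model against its own observed outcomes and flags drift.
# The metric and tolerance are hypothetical, not any lab's actual criteria.

def flag_performance_drift(
    predictions: list[int],     # model's binary calls on recent local cases
    outcomes: list[int],        # observed patient outcomes for those cases
    certified_accuracy: float,  # accuracy reported when the model was vetted
    tolerance: float = 0.05,    # allowed drop before raising an alert
) -> bool:
    """Return True if local accuracy has slipped below the certified level."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    local_accuracy = correct / len(outcomes)
    return local_accuracy < certified_accuracy - tolerance

# Example: a model certified at 87% accuracy, rechecked on ten local cases.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
actual = [1, 0, 0, 1, 0, 0, 0, 1, 1, 1]
if flag_performance_drift(preds, actual, certified_accuracy=0.87):
    print("Alert: model performance has drifted below its certified level.")
```

A check like this is context-driven in exactly the sense Robbins describes: the same model can pass at one hospital and fail at another, because the data it is judged against is local.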

That approach should appeal to Paragon Health Institute’s Coleman, who contends that working with assurance labs should be voluntary and agrees that using software is the way forward.

“This is actually an opportunity for AI to police themselves,” he said.

Even without consensus on how AI should be validated, Anderson is pushing ahead with his plan while appealing to Washington. In December, he spoke at HHS’ annual Assistant Secretary for Technology Policy Conference.

Who will ultimately benefit from the assurance lab model remains uncertain, and sources close to CHAI, Microsoft, Amazon and Google say the companies aren’t sure the model card concept works. And while CHAI’s big tech members are not among the first to launch assurance labs, some are developing competing platforms. In 2024, Duke Health and Avanade, a joint venture between Microsoft and Accenture, announced a platform to help health systems account for all the AI they use. Eventually, it is supposed to evaluate and monitor AI performance.

Anderson believes the market will determine how AI is best vetted, by assurance labs or otherwise: “If the value that we’re trying to drive, in this case, is creating trust and transparency for safe and effective AI tools, I’m OK with there being winners and losers.”

