Climate data rescue: scientists revive federal sites after Trump shutdowns

How a shutdown turned into a rescue mission

The federal government pulled the plug on some of the country’s most visible climate websites, and the response was immediate. On May 31, 2025, the administration terminated the entire team behind Climate.gov, the popular NOAA-backed site known for its clear charts, classroom explainers, and updates on warming trends. Soon after, the U.S. Global Change Research Program’s globalchange.gov vanished, taking with it all five versions of the National Climate Assessment—the congressionally mandated reports that distill how a warming planet is reshaping life in the United States.

“They’re public documents. It’s scientific censorship at its worst,” said Peter Gleick, a California water and climate scientist who helped author the first assessment back in 2000. “This is the modern version of book burning.” The reports can still be found through workarounds, he added, but that’s not the point. If the government pulls them from view, ordinary people—teachers, city planners, farmers—are the first to lose access. “This information will be harder and harder for the American public to find,” he warned.

Why does it matter? The National Climate Assessment (NCA) is the U.S. government’s climate baseline. It translates peer‑reviewed science into risks and projections by region and sector, from wildfire and drought in the West to flooding and extreme heat in the South and Midwest. States use it to plan roads and drainage. Hospitals use it to map heat risks. Insurers and utilities use it for long‑term decisions. Without globalchange.gov, the official gateway is dark, and that complicates everything from grant applications to school curricula.

Climate.gov filled a different but crucial role: communication. It connected the dots between data and daily life—El Niño, coral bleaching, hurricane probabilities, sea‑level rise. “We operated exactly how you would want an independent, non‑partisan communications group to operate,” said Rebecca Lindsey, the site’s former program manager. It wasn’t advocacy. It was translation: what the data shows, and what it means.

Shutting down public science websites doesn’t delete the science itself. The data and reports are created by networks of agencies, universities, and labs. But access is power. The decision to remove the federal front doors to that information—especially reports Congress requires—reverberated across agencies and classrooms. Teachers reported broken lesson plans. City resilience offices scrambled for archived links. Researchers warned about “time series breaks,” when gaps in continuous data make long‑term trends harder to analyze.

The administration has defended the removals as part of what it calls a push to “restore a gold standard for science” and move away from “ideological activism.” To scientists and librarians now racing to save content, that framing rings hollow. They see an attack on public access to vetted climate science at the exact moment communities are asking for better flood maps, heat alerts, and wildfire forecasts.

So the rescue began.

Within days, a coalition of librarians, coders, scientists, and advocates started mirroring sites, scraping datasets, and documenting what disappeared. The Harvard Law School Library stood up the Data.gov Archive. Harvard’s T.H. Chan School mirrored public health records that inform heat and air‑quality guidance. The law library’s Innovation Lab preserved 311,000 datasets copied between 2024 and 2025—an insurance policy against vanishing links and orphaned files.

The Data Rescue Project became a clearinghouse, directing volunteers and institutions to priority targets—federal pages at risk of being changed or removed. Long‑standing open‑government groups, including Free Government Information and the Preservation of Electronic Government Information (PEGI) alliance, organized urgent campaigns to copy, catalog, and store federal records before they slipped offline.

These aren’t simple screenshots. Teams use web crawlers to capture full sites, including PDFs, images, and embedded files. They generate checksums to verify integrity, tag metadata for search, and snapshot “dynamic” pages that change over time. When a link breaks, they try to reconstruct context—what was here, when, and what version is authoritative. The goal isn’t just saving files; it’s preserving provenance so researchers can trust what they’re seeing a year—or ten years—down the line.
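The checksum step described above can be sketched in a few lines. This is an illustrative toy, not the workflow of any group named in the article; `record_snapshot` and `verify_snapshot` are hypothetical helper names. The idea is simply that pairing each captured file with a SHA-256 hash lets a future reader prove the bytes haven't changed since the crawl.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_snapshot(url, content, fetched_at=None):
    """Build a minimal provenance record for one archived file.

    Hypothetical helper: stores the source URL, capture time, size,
    and a SHA-256 digest of the raw bytes.
    """
    return {
        "url": url,
        "fetched_at": fetched_at or datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
        "size_bytes": len(content),
    }

def verify_snapshot(record, content):
    """Re-hash the stored bytes and compare against the recorded digest."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

page = b"<html>El Nino explainer</html>"  # stand-in for a crawled page
rec = record_snapshot("https://example.gov/el-nino", page)
print(json.dumps(rec, indent=2))
print(verify_snapshot(rec, page))         # True: bytes unchanged
print(verify_snapshot(rec, page + b"!"))  # False: any edit changes the hash
```

Real pipelines layer much more on top (WARC containers, metadata schemas, replicated storage), but the core guarantee is the same: a matching hash means the file is the one that was captured.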

Public health leaders are sounding alarms, too. George Benjamin, who heads the American Public Health Association, said the removals could impair tracking of infectious diseases like HIV and mpox. Even if some pages are restored, he warned, halted data collection means lost continuity—and lost continuity makes it harder to detect outbreaks, evaluate interventions, or understand environmental drivers like extreme heat and smoke. The damage isn’t only what disappears today; it’s what never gets recorded tomorrow.

Courts have started to weigh in. Doctors for America sued to restore health information and won an early victory. On February 11, 2025, a federal judge issued a restraining order requiring restoration of certain Health and Human Services, CDC, and FDA websites—an emergency step that underscored how fast essential information can vanish. Additional lawsuits have been filed by the American Federation of Teachers, Minority Veterans of America, and the Public Citizen Litigation Group, testing whether sweeping removals of public information run afoul of federal law.

The legal stakes are bigger than any one website. Advocates argue that deleting congressionally mandated assessments and large swaths of federal records could clash with statutes like the Federal Records Act and obligations for agencies to provide accurate, accessible information. They also point to administrative law: abrupt policy shifts without public process or explanation can be vulnerable in court. The administration’s counter‑argument is straightforward: agencies set their own communications strategy and are trimming what they see as politicized content. The courts will now decide where those lines are.

While the legal fight plays out, archivists and volunteers are trying to build something that looks like resilience. The End of Term Web Archive—an interagency effort that sweeps federal domains at pivotal moments—has become a lifeline, preserving snapshots of websites across administrations. The Internet Archive’s Wayback Machine, a workhorse of online memory, gives the public and researchers access to what used to be on a page—even when the original host has been shut down.

This is more than nostalgia. Communities use these records to plan for extreme heat that strains hospitals, floods that overwhelm storm drains, and wildfires that erase entire neighborhoods. Builders look up sea‑level scenarios before elevating a substation. School districts teach climate literacy using federal visuals and maps. Strip away the authoritative source, and people turn to whatever pops up in a search bar. That’s a recipe for confusion.

Who’s saving what—and what’s next

Behind the scenes, the preservation push looks like a relay race. Different groups grab different batons.

  • Universities are hosting mirrors, from the Harvard Law School Library’s Data.gov Archive to public health repositories at the Chan School.
  • Librarians are cataloging datasets and reports, adding metadata so a teacher in Phoenix can find a heat index chart as easily as a planner in Miami can find a sea‑level map.
  • Open‑government coalitions like PEGI and Free Government Information are coordinating volunteers and creating checklists so teams don’t duplicate work or miss high‑risk pages.
  • Technical volunteers are running crawlers, validating file integrity, and preserving “living” pages that update frequently, such as dashboards and interactive maps.

Former federal staffers are also keeping track of what changed and when. That matters for accountability. If the language around wildfire risks in the Southeast was altered, or an urban heat projection removed, someone needs to document it. Provenance—who wrote it, how it was reviewed, and which version is official—can make or break a court case or a policy decision.

On the climate side, much of the attention centers on the NCAs. Those reports didn’t just summarize science; they set reference points used in grant criteria, federal planning guidance, and infrastructure standards. When an engineer designs a bridge to handle projected rainfall in 2050, they often cite NCA tables or figures. When those references go offline, the paperwork snarls—and the work slows.

Meanwhile, K‑12 and college instructors are improvising. Many had lesson plans tied to Climate.gov’s explainers and graphics—El Niño primers, Arctic sea‑ice charts, ocean heat content visuals. Without the official host, teachers are leaning on archives, but that raises practical questions: is the file the latest version? Is there updated context? Does the archived page reflect the best current science? Librarians can help, but it’s a heavy lift at scale.

The health stakes are just as concrete. Federal websites house heat‑health guidance, air‑quality alerts, and pathogen surveillance dashboards. Those feeds inform local health departments and hospital systems. Interruptions ripple outward: fewer alerts, slower response, muddier guidance for vulnerable groups like older adults, outdoor workers, and people with chronic conditions.

Expect more court filings and more stopgap fixes. The February restraining order that touched HHS, CDC, and FDA shows judges are willing to act quickly when core public health information disappears. Climate sites may face a similar trajectory—emergency motions, targeted restorations, and long arguments about statutory duties and agency discretion.

Even if some websites are restored, continuity is the bigger battle. Daily, weekly, and monthly updates create the backbone of long‑term climate indicators. If those updates stop for months, researchers are left with a statistical pothole. You can try to fill it, but the uncertainty grows. That’s why preservation groups are urging agencies not only to flip the switches back on but to resume the routine uploads that keep trend lines intact.

There’s also the question of trust. Government sites carry weight because they signal a process—peer review, interagency vetting, public documentation. Mirrors and archives can keep information available, but they don’t replace the role of an official source. That’s why many archivists describe their work as a firewall, not a fix. The long‑term solution, they say, is a stable federal commitment to keep congressionally required reports and datasets online, regardless of political winds.

For people who need information now, there are practical steps. Researchers and reporters are leaning on the End of Term Web Archive and the Wayback Machine to retrieve pages as they appeared before removals. The Data.gov Archive is helping users locate federal datasets that no longer have active landing pages. Advocacy groups are compiling “where to find it now” guides that point to mirrors and institutional repositories. It’s a patchwork, but it beats starting from scratch.
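For readers doing this retrieval themselves, the Wayback Machine's replay URLs follow a documented pattern: a timestamp (`YYYYMMDD`, optionally extended to seconds) inserted between `web.archive.org/web/` and the original address, with the service redirecting to the closest snapshot it holds. A minimal sketch, with an illustrative page and function name:

```python
from datetime import date

# Wayback Machine replay URL pattern: timestamp + original URL.
WAYBACK_REPLAY = "https://web.archive.org/web/{timestamp}/{url}"

def wayback_url(original_url, on_date):
    """Build a replay URL for a page roughly as of a given date.

    The Wayback Machine accepts a 4-14 digit timestamp and redirects
    to the nearest snapshot, so a plain YYYYMMDD date is enough.
    """
    return WAYBACK_REPLAY.format(
        timestamp=on_date.strftime("%Y%m%d"),
        url=original_url,
    )

print(wayback_url("https://www.climate.gov/", date(2025, 5, 1)))
# https://web.archive.org/web/20250501/https://www.climate.gov/
```

Whether that snapshot exists, and whether it is the authoritative version, still takes the kind of provenance checking the archivists describe.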

There’s no sugarcoating the scope. The Innovation Lab’s figure—311,000 datasets preserved between 2024 and 2025—hints at the volume of information at risk when federal domains are pruned or shuttered. And that’s just what’s been captured. Countless files live behind interfaces, APIs, and dashboards that are harder to archive faithfully. Once those are gone, recreating them isn’t just a matter of finding a PDF; it can mean rebuilding an entire data pipeline.

Fueling this effort is a deeper argument about who owns public science. Taxpayer‑funded data underpins everything from crop insurance to disaster aid. The climate assessments were ordered by Congress to ensure a common understanding of risk. When those touchstones vanish from official view, the debate shifts from policy to access itself. The response—a fast‑moving alliance of librarians, scientists, technologists, and public‑interest lawyers—suggests that access is a line many are unwilling to see crossed.

No one expects a single, neat resolution. Some sites will likely reappear under court order. Others may return with new language or structure, prompting fresh fights over what counts as neutral communication. Parallel to that, universities and nonprofits will keep building independent mirrors and catalogs, hedging against future swings. It’s the new normal: public records infrastructure with backups outside the government.

For now, the work is urgent and unglamorous. It’s checksum logs and metadata fields. It’s late‑night scrapes of maps and dashboards. It’s phone calls with former agency staff to understand how a dataset was compiled. It’s teachers updating lesson links, city planners rewriting citations, and health departments double‑checking guidance. The stakes aren’t abstract. They’re measured in flood insurance rates, heat‑stroke ER visits, and wildfire evacuation maps.

Strip away the politics and it’s simple: public science only matters if the public can find it. That’s why this rescue is less about winning a narrative and more about keeping the lights on. Whether the official sites return tomorrow or next year, the work of preserving and sharing climate data has already become a community project. And it’s not slowing down.
