As a cybersecurity reporter at ProPublica, much of my work over the past two years has focused on how the federal government and its IT contractors, like Microsoft, have navigated major technological transitions. The one now in the news every single day is artificial intelligence.
This emerging technology has its grip on everyone: Home users, businesses and the federal government are all rushing to use it. President Donald Trump and his Cabinet say AI will transform the country, making us more prosperous, efficient and secure, if only we can adopt it fast enough.
But this messaging isn't new. President Barack Obama's administration used nearly identical language a decade and a half ago as the U.S. barreled into the technological revolution of cloud computing.
I've studied how the federal government has handled, and mishandled, this transition over the past two decades, and my reporting offers some cautionary tales and valuable lessons as policymakers encourage the use of AI and federal agencies adopt the technology.
Lesson 1: There’s no such thing as a free lunch
Then: In the early 2020s, a series of cyberattacks linked to Russia, China and Iran left the federal government reeling. The Biden administration called on major tech companies to help the U.S. bolster its defenses. In response, Microsoft CEO Satya Nadella pledged to give the government $150 million in technical services to help upgrade its digital security. The company also offered a “free” security upgrade for government customers.
Now: Last year, the Trump administration announced a raft of agreements with tech companies that were meant to help federal agencies “buy enterprise AI tools at government-friendly pricing.” Agencies could use OpenAI’s ChatGPT for $1. Google’s Gemini for 47 cents. Grok by xAI for 42 cents. The administration hoped that the low-cost pricing would make it “easier for federal teams to acquire powerful AI capabilities … to enhance mission delivery and operational efficiency.”
The takeaway: Beware of freebies. Our investigation into Microsoft’s seemingly simple commitment revealed a more complex, profit-driven agenda. After installing the upgrades, federal customers would be effectively locked in, because switching to a competitor after the free trial would be cumbersome and costly. At that point, the customer would have little choice but to pay the higher subscription fees. The plan worked: One former Microsoft salesperson told me “it was successful beyond what any of us could have imagined.” In response to questions about the commitment, Microsoft has said its “sole goal during this period was to support an urgent request by the Administration to enhance the security posture of federal agencies who were repeatedly being targeted by sophisticated nation-state threat actors.”
Agencies looking to purchase AI tools at discounted rates today must consider how the costs might balloon down the road. The General Services Administration warns that AI “usage costs can grow quickly without proper monitoring and management controls” and advises agencies to “set usage limits and regularly review consumption reports.”
Lesson 2: Oversight programs are only as effective as their resources
Then: In the Obama era, the federal government shifted its sensitive information and computing needs to data centers owned and operated by private companies. Acknowledging the potential risks, the administration created the Federal Risk and Authorization Management Program, or FedRAMP, in 2011 to help ensure the security of the cloud computing services that it was encouraging U.S. agencies to use.
But in my recent investigation of the program, I found it was no match for Microsoft, which effectively wore down the FedRAMP team over five years as the company sought the program’s seal of approval for a major cloud offering known as GCC High. Despite serious reservations about its cybersecurity, FedRAMP ultimately authorized the product, in part because it lacked the resources to keep going. In response to questions, Microsoft told me: “We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary.”
Now: Today, this tiny outpost within the General Services Administration has even fewer resources to oversee the cloud technology on which the government depends, including AI. FedRAMP says it now operates “with an absolute minimum of support staff” and “limited customer service.” The program was an early target of the Trump administration’s Department of Government Efficiency.
The takeaway: FedRAMP, which a 2024 White House memo said “must be an expert program that can analyze and validate the security claims” of cloud providers, is now little more than a rubber stamp for the tech industry, former employees told me. As federal agencies adopt AI tools that draw upon reams of sensitive information, the consequences of this downsizing for federal cybersecurity are far-reaching. A GSA spokesperson defended the program and said FedRAMP now “operates with strengthened oversight and accountability mechanisms.”
Lesson 3: “Independent” reviews are only so independent
Then: The government has long relied on so-called third-party assessors to verify the security claims made by cloud service providers like Microsoft and Google. In theory, these firms are supposed to be independent experts that offer a recommendation to FedRAMP on whether a product meets federal standards. But in practice, their independence has an asterisk: They’re paid by the companies they’re evaluating.
My recent investigation found that this setup creates an inherent conflict of interest. In the case of Microsoft’s GCC High, two assessors recommended the product despite being unable to fully vet it, according to a former FedRAMP reviewer. One of those firms did not respond to my questions and the other denied this account.
FedRAMP, we found, is well aware of how the financial arrangement between the cloud companies and their assessors can distort official findings about cybersecurity problems. The program even created a “back channel” to encourage assessors to share concerns they might not otherwise raise in their official reports for fear of angering their tech clients and losing business.
Now: With FedRAMP reduced to being a “paper pusher,” as one former GSA official put it, these third-party assessment firms have taken on even more significance in the vetting process. In response to questions from ProPublica, the GSA said that FedRAMP’s system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.” It did not respond to questions about the program’s back channel.
The takeaway: The pendulum has essentially swung back to the pre-FedRAMP era, when each federal agency was individually responsible for vetting the products it used. The GSA told me that FedRAMP’s job is “to ensure agencies have sufficient information to make these risk decisions.” The problem is that agencies often lack the staff and resources to do thorough reviews, which means the whole system is leaning on the claims of the cloud companies and the assessments of the third-party firms they pay to evaluate them.

