In the early days of Amazon Web Services, technical evangelist Jeff Barr was putting in long hours on the road, pitching a novel concept: rent computing power for 10 cents an hour, and storage for 15 cents a gigabyte per month. No servers to buy, no data centers to build.
Barr remembers calling his wife to check in at the end of the day. Get a nice dinner, she told him, you deserve it. But later, at the restaurant, looking at the menu and doing the math in his head, he couldn't help but ask himself whether the pennies were adding up.
"Did enough people start using these servers to buy me a decent steak?" he wondered.
He probably should have ordered the filet.
Twenty years later, AWS generates nearly $129 billion a year in revenue. That's enough to rank in the top 40 of the Fortune 500 if it were a standalone company, ahead of the likes of Comcast, AT&T, Tesla, Disney, and PepsiCo. Companies such as Netflix, Airbnb, Slack, Stripe and thousands more have built massive businesses on its platform.
When AWS goes down, the outage ripples across the web, taking down apps, websites, and services that most users never knew were running on a common infrastructure.
But the business that defined cloud computing, bankrolling Amazon's expansion into everything from streaming to same-day delivery, is now grappling with the most significant challenge since its launch. The rise of AI has upended the industry, empowering Microsoft, Google and others, and creating competitive dynamics that seem to change every month.
For the first time, AWS faces questions about its long-term ability to lead the market it created.
With Amazon marking the 20th anniversary of AWS this month, GeekWire spoke with early developers, current AWS insiders, and longtime observers of the company to tell the story of how the business got started, how it won the cloud, and what it's up against now.
Scalable, reliable, and low-latency
Officially, Amazon pegs the public launch of AWS to March 14, 2006. That's when it announced "a simple storage service" that offered software developers "a highly scalable, reliable, and low-latency data storage infrastructure at very low costs."
Dubbed S3, it was Amazon's first metered cloud service: the first time developers could pay for exactly what they used, billed in tiny increments, with no upfront commitment.
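The metered model is plain arithmetic under the launch-era rates quoted above (10 cents per server-hour, 15 cents per gigabyte-month). A minimal sketch, using those 2006 rates rather than anything resembling current AWS pricing:

```python
# Illustrative only: launch-era rates quoted in this article,
# not current AWS pricing.
EC2_RATE_PER_HOUR = 0.10      # dollars per server-hour (2006)
S3_RATE_PER_GB_MONTH = 0.15   # dollars per gigabyte-month (2006)

def monthly_bill(server_hours: float, gb_stored: float) -> float:
    """Pay only for what you use: no upfront commitment, no license fee."""
    return server_hours * EC2_RATE_PER_HOUR + gb_stored * S3_RATE_PER_GB_MONTH

# One small server running around the clock for a 30-day month,
# plus 50 GB of storage:
print(f"${monthly_bill(24 * 30, 50):.2f}")  # $79.50
```

At those rates, a developer could run a modest service for less than the cost of a single physical server, which is the economic shift the rest of the story turns on.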

All of this might sound mundane in a modern world where the cloud and internet services are almost like electricity and water, seemingly always there when you need them.
But remember the context of that moment: Facebook was available only on college campuses. Netflix arrived on DVDs in the mail. The iPhone was still a year away from being unveiled. And over at Microsoft in Redmond, they were finally getting ready to ship Windows Vista.
The asterisk in the headline
The history of Amazon Web Services is more complicated than it might seem, and it is actually a subject of some disagreement behind the scenes. There are multiple origin stories, including one offered by Amazon itself, and others from former employees who say the company has tidied up the narrative over the years to shape the lore around its current leaders.
Journalist Brad Stone, author of the canonical Amazon book, "The Everything Store," discovered this when Andy Jassy, the longtime AWS CEO who would go on to succeed Jeff Bezos as Amazon CEO, disputed aspects of his telling of the AWS story in a one-star review.
One point of contention: the origins of EC2, the AWS service built by a small team in South Africa, and the degree to which it sprang from the process Jassy led or was born independently.
Part of the challenge: Amazon, despite running the storehouse of the internet, isn't great at preserving its own history. The company, which cooperated with this piece, wasn't able to unearth key documents such as Jassy's original AWS six-pager from September 2003.
Some former Amazon leaders trace things further back, to a set of e-commerce APIs that Amazon launched in July 2002, allowing external developers to access its product catalog and build applications on top of it. By that accounting, AWS is closer to 24 years old.
Overcoming internal opposition
The effort was led by business leader Colin Bryar, who ran Amazon's affiliate program, along with technical leader Robert Frederick, whose Amazon Anywhere team (focused on making Amazon's website and features available on mobile devices) had been working since 1999 on internal web services that became the foundation for the external APIs.
Amazon in those days was on Seattle's Beacon Hill, in the landmark art deco Pacific Medical Center tower overlooking downtown. Jeff Bezos was directly involved from the start, a believer in the vision that Amazon's infrastructure capabilities could become a big business.
In 2002, when Bryar initially pitched a roomful of senior leaders on the idea of opening up Amazon's product catalog and features as web services to external developers, nearly all of them said no, as Frederick recalled in a recent interview.
The objections piled up: it could cannibalize existing business; it could educate competitors. Then, as Frederick remembers it, Bezos looked around the table and let out one of his trademark piercing laughs. Amazon's founder wanted to see what developers would do.
"Let's do it," Frederick recalls Bezos saying, "and let's have them surprise us."
Later, in a July 2002 press release announcing "Amazon.com Web Services," Bezos used nearly identical language: "We can't wait to see how they're going to surprise us."
Big developer response
Within months, tens of thousands of developers had signed up. Increasingly, they were asking for things like storage, hosting, and compute, recalled Frederick, who worked at Amazon through mid-2006. He went on to found the IoT platform Sirqul in 2013 and remains its CEO.
Another veteran of those early days agreed that the developer response to those initial e-commerce APIs may have opened the minds of Amazon's leaders to the larger possibilities.
"Maybe that's where Andy's brain lit up. … Maybe that's where Jeff's brain lit up," said Dave Schappell, referring to Jassy and Bezos. Schappell arrived at Amazon in 1998 as Jassy's MBA intern, dropped out of Wharton to stay, and spent the next seven years working with him.
Schappell ran the affiliate program after Bryar, became an early head of product for AWS, and hired the original product managers, among them Jeff Lawson, who went on to found Twilio. Schappell himself became a well-known Seattle entrepreneur before returning to AWS for four years after Amazon acquired his startup TeachStreet.
The 'crystal-clear movie moment'
Jeff Barr was one of the developers who noticed.
Now an Amazon VP and longtime AWS chief evangelist, Barr was working as an outside consultant in the web services field when he logged into his Amazon Associates account one day in 2002 and noticed a new message.

Amazon now had XML, it said, referring to the data-formatting standard that allowed software systems to communicate over the internet. Amazon was making its product catalog available as a web service and connecting it to the affiliate program, a surprising move at the time.
"I clicked through, I signed up for the beta. I downloaded it right away," Barr recalled.
He sent feedback to the email address in the documentation. They actually replied.
Before long, he was invited to a small developer conference at Amazon's headquarters: maybe four or five attendees at the Pacific Medical Center tower, in a semicircular open space with a view of the city. The developers sat in the middle, with Amazon employees around them.
At one point, one of the Amazon presenters announced that they were so impressed at how developers had learned the APIs and started publishing apps within 48 hours that they were going to look around the rest of the company for more services to open up.
"That was that crystal-clear movie moment," Barr said. He turned to an Amazon employee nearby and told her: "I have to be part of this."
Creating the cloud
But what Frederick and his team had built was essentially a way for outside developers to access Amazon's product data. It was not yet the cloud as we know it today.
That shift started in mid-2003, as Jassy told the story in a 2013 talk at Harvard Business School. Jassy, then serving as Bezos's technical advisor, was tasked with figuring out why software projects across Amazon were taking so long. It turned out that engineers were spending months building storage, database, and compute solutions from scratch.
In a meeting of six or seven people that summer, someone made the observation that would change the company's trajectory. Jassy recalled the thinking during his HBS talk: "We're pretty good at this. And if we're having so many problems, and we don't have anything we can use externally, I imagine a lot of other companies probably have the same problem."
Around the same time, Amazon recruited Werner Vogels, a Cornell distributed systems researcher, as its chief technology officer. He almost didn't take the call. "It's an online bookstore," he recalled in a LinkedIn post last week. "How hard could their scaling be?"
But the company was wrestling with every problem he and his colleagues had been theorizing about (fault tolerance, consistency, availability at scale) live in production, every day.
Fundamental building blocks
Schappell remembers those early days as a continual cycle of six-page memos and meetings with Jassy and Bezos, all focused on trying to figure out what to build.
The concept that would come to define AWS (breaking every capability down to its most fundamental building block, or "primitive") didn't arrive fully formed. "I don't think he said that on day one," Schappell said of Bezos. "I think he said it after he read 47 of our six-pagers."
Each primitive would stand on its own, and customers would pay only for what they used, billed in tiny increments. It was a direct rebuke to the licensing models of companies such as database giant Oracle, where customers paid for everything whether they used it or not.
Rahul Singh, who joined AWS in January 2004 as one of its first engineers, recalled the early technical plans going through just one layer of review before reaching Bezos and Jassy. (It's the kind of streamlined decision-making that Jassy is now trying to revive across the company.)
Fault tolerant by design
In one early meeting, Bezos told the engineers he wanted a server touched exactly twice: once when it was installed in the data center, and once years later when it was pulled out. In between, nothing. The software had to be built to tolerate failures, leaving dead machines behind and moving on. It was a philosophy that would define the architecture of the cloud.
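The "touch the server twice" rule implies software that routes around dead machines rather than waiting for repairs. A toy sketch of that failure-skipping idea, with the node names and the simulated dead node entirely hypothetical:

```python
import random

class NodeDown(Exception):
    """Raised when a request hits a dead machine."""

def fetch_from(node: str, key: str) -> str:
    # Hypothetical stand-in for a network call; here, node "b" is dead.
    if node == "b":
        raise NodeDown(node)
    return f"{key}@{node}"

def fault_tolerant_get(nodes: list, key: str) -> str:
    """Try replicas in random order; skip dead machines and move on."""
    for node in random.sample(nodes, len(nodes)):
        try:
            return fetch_from(node, key)
        except NodeDown:
            continue  # leave the dead machine behind; no repair, no retry
    raise RuntimeError("all replicas unavailable")

print(fault_tolerant_get(["a", "b", "c"], "object-123"))
```

The point of the design is in the `continue`: a dead server costs one failed call, not an operator's pager, which is what allows hardware to be touched only at install and removal.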
On Singh's first day, his manager Peter Cohen sat him down in the lunch area and handed him a planning document (a "PR/FAQ" in Amazon lingo) that had just been approved by Bezos.
"We're calling this S4," Cohen said. Singh looked at the name of the product, Simple Server-Side Storage Service, and pointed out that it should be called S5. Singh recalls Cohen's response: "Yeah, you're really smart, aren't you? Let's see if you can actually build this."
It was eventually shortened to Simple Storage Service, or S3.
The queuing service SQS had launched in beta in 2004 (adding further to the debate over the origin story and what counts as the launch), but S3 was the first service made generally available.
A billion-dollar business?
Jassy, then the VP responsible for AWS, would hold all-hands meetings in a conference room for four or five engineers, most of them straight out of college and grad school, as Singh recalled in an interview. Jassy ran them with the discipline of a much larger organization, repeating over and over that AWS could be a billion-dollar business, at a time when it had no revenue at all.
Singh remembers being deeply skeptical.
"I was young and naive, and I remember thinking: a billion, that's a really big number," Singh said. Years later, he would joke with Jassy that the prediction had been completely wrong: it turned out to be a multi-billion dollar business, many times over.
In a LinkedIn post marking the March 14 anniversary, current AWS CEO Matt Garman, who joined the company as a summer intern in 2005, before the launch of S3, recalled how early customers like FilmmakerLive and CastingWords took a bet on the fledgling platform.
"That shift changed the economics of building technology almost overnight," he wrote.
Meanwhile, in Cape Town …
While one team was building S3 in Seattle, the compute side of the equation was taking shape 10,000 miles away. Chris Pinkham, an Amazon VP who wanted to move back to his native South Africa, was given permission to set up a development office in Cape Town.
His small team built EC2, the Elastic Compute Cloud, largely independent of the Seattle operation. The local tech community was a bit bewildered by what Amazon was doing.
"We knew this bookstore had arrived in town," recalled Dave Brown, who was working at a local payments startup at the time. He asked his friends who had joined what they were doing.

"It's kind of like, you know, you can rent a computer on the internet," they told him.
Brown asked about the revenue. "Tens of dollars every single day," they said.
He remembers wondering why they were wasting their time on that.
The answer became clear when EC2 launched in August 2006, five months after S3, adding compute to storage as another fundamental building block of AWS and the cloud.
Early customers showed EC2's range: a Spider-Man movie used it for rendering, and Facebook apps like FarmVille and Animoto spun up instances on demand, as Brown recalled.
A New York Times engineer used a personal credit card to run optical character recognition on the paper's scanned archives over a weekend, making the full archive searchable, after being told by the company that it would be cost-prohibitive using traditional approaches. It cost a grand total of a couple hundred bucks, even after he initially messed up and had to run it again.
Typing ahead of the characters
Brown joined in August 2007, the 14th person on the EC2 team. They worked out of a tiny office in Constantia, the winelands part of Cape Town, across the highway from vineyards.
They occupied part of one floor of an office building. There was one conference room and two offices; the rest was open plan. The team was 14 engineers, one product manager, and Peter DeSantis, the leader who came from Seattle to help build the service.
The internet connection was a four-megabit DSL line shared by the entire office, with 300 milliseconds of latency to the data centers in the U.S. When engineers typed on their screens, each character had to make the round trip across the ocean and back before it appeared.
"You get really good at typing ahead of where the actual characters are appearing," Brown said.
Every morning, someone had to find the VPN token to get the office online. It lasted about 10 hours before it automatically reset. "Everybody would be shouting, where's the VPN token?"
Keeping up with demand
At one point, they were running low on computing capacity. DeSantis came out of his office and told the engineers to shut down the machines they were using for testing. That freed up enough capacity to keep the service going for a few days until the next racks of hardware came online.
Marc Brooker, now an AWS VP and distinguished engineer working on agentic AI, joined the EC2 team in Cape Town in 2008. He could see the entire team from his desk. When Brown was away one day, Brooker and the team covered every surface of his office in sticky notes, the kind of prank that only works in a small office where everyone knows everyone else.
Brooker was drawn in by something he heard about in his job interview: the team had built a way to make a distributed system look like a physical hard drive to the operating system.
"Wow, that's so cool," he recalled thinking. "Here's 20 other things I can think of that we could do with that kind of technology."
That instinct, that the building blocks of the cloud could be combined and recombined in ways nobody at Amazon had imagined, was at the core of what made AWS catch on.

"The world would be in a very different place if you didn't have the freedom to experiment, to pilot, to try something, to move on to another idea, that AWS first introduced," said Mai-Lan Tomsen Bukovec, an AWS VP who has led S3 for 13 of its 20 years.
Prasad Kalyanaraman, now the AWS vice president who oversees global infrastructure, previously spent years building supply-chain forecasting systems for Amazon's retail operation. Around 2011, Charlie Bell, then a senior AWS leader, asked him to help with a problem: the team was forecasting its compute demand using spreadsheets.
He adapted the supply-chain forecasting tools for AWS, but the cloud business kept outrunning every model he built.
"The funny thing about forecasts is that forecasts are always wrong," he said. "It's very hard to actually predict exponential growth."
How AWS grew
It began with startups. The companies that would define the next era of technology were building on AWS. Airbnb, Instagram, and Pinterest all got their start on the platform.
John Rossman, a former Amazon exec and author of books including "The Amazon Way" and "Big Bet Leadership," remembers Jassy pulling him aside for coffee at PacMed around 2008. Rossman had left Amazon and was working as a consultant to large corporations. Jassy wanted to know: did he think big companies would ever be interested in on-demand computing?
Maybe, maybe not, Rossman said. He was working with Blue Shield of California at the time, and tried to imagine them running on AWS. It was hard to picture. Back then, the typical AWS customer was a startup developer with little budget for infrastructure. The idea that a huge insurance company would run on AWS seemed like a stretch.
"I was a little bit of a pessimist on it," Rossman said.
But soon things started to change.
Netflix moved its streaming infrastructure to AWS beginning in 2009, a decision that carried particular weight because Netflix competed with Amazon in video. In 2013, the CIA awarded AWS a contract over IBM, signaling that the platform was trusted at the highest levels of security.
Microsoft tips its hat
AWS's pricing model, in which customers paid only for what they used, was a direct threat to the licensing businesses of tech's old guard. Whether burying their heads in the sand or simply preoccupied, the companies that would become the biggest AWS rivals were slow to respond.
Microsoft didn't unveil its cloud platform (code-named "Red Dog," and initially launched as "Windows Azure") until October 2008, more than two years after S3 debuted. Bill Gates had left his day-to-day role at Microsoft a few months earlier. The company was still recovering from the aftermath of the Vista flop.
"I'd like to tip my hat to Jeff Bezos and Amazon," said Ray Ozzie, then Microsoft's chief software architect, at the launch event, a rare public acknowledgment of a competitor's lead.
Azure didn't reach general availability until 2010, and its early approach was more of a platform for applications, not the raw infrastructure that made AWS so popular with developers. It took years to build out comparable offerings.
Google launched App Engine, a platform for running applications, in 2008, but didn't offer raw computing infrastructure to rival EC2 until Compute Engine arrived in 2012.
'The AWS IPO'
For years, AWS grew in something close to silence. Amazon said little about the overall growth, and didn't break out the financial results for the business in its quarterly earnings reports.
Then, in April 2015, Amazon reported its first-quarter earnings with AWS broken out in detail for the first time, and it shocked the industry. The business had a $6 billion annual revenue run rate and was growing 50% a year.

AWS generated more than $250 million in profit that quarter alone, with operating margins around 17%. It was a stark contrast with the rest of Amazon, which was scraping by on traditional retail margins of 2% to 3%. AWS was making significantly more profit on every dollar of revenue.
The hosts of the Acquired podcast, in their in-depth 2022 history of the rise of Amazon Web Services, would later call this moment "the AWS IPO," in effect.
Amazon stock jumped 15% on the news.
"I was blown away," said Schappell, the early AWS product leader who left in 2004 and later listened to the first AWS earnings breakout while training for a marathon. For years, he had assumed Amazon was losing billions on AWS. The reality was the opposite: AWS had become so profitable that it was effectively bankrolling Amazon's future.
The margins kept climbing, reaching 35% by early 2022.
Then the pandemic cloud boom faded. Inflation spiked amid broader economic uncertainty. Customers scrutinized their cloud bills and pulled back spending. AWS revenue growth fell from 37% to 12% over the course of the year, the slowest in its history. Margins fell to 24%.
The ChatGPT moment
Then everything changed, for Amazon and everyone else.
On November 30, 2022, OpenAI released ChatGPT, with little fanfare at first. The consumer AI chatbot quickly became the fastest-growing application in history, reaching 100 million users in two months and sending the technology world into a frenzy in the ensuing months.
For AWS, the stakes were enormous. Every major wave of technology over the previous 15 years, from mobile to social to streaming to e-commerce, had been built on its platform.
If AI was the next wave, AWS needed to lead the way again.
Amazon was far from absent in AI. AWS had launched SageMaker in 2017, giving developers tools to build and deploy machine learning models. It had released custom AI chips for inference and training. Alexa, the voice assistant, had been processing natural language queries since 2014. Amazon had spent many years and billions of dollars on machine learning.
But none of it looked or worked like ChatGPT. The new model could write code, draft essays, answer complex questions, and hold a conversation. It was not a feature. It was a product people wanted to use. And it was built by an AI lab running on Microsoft Azure.
'AWS sneaked in there'
The irony: OpenAI didn't start on Microsoft's cloud. It launched on AWS.
When the AI lab debuted in December 2015, AWS was listed as a donor. OpenAI was running its early research on Amazon's infrastructure under a deal worth $50 million in cloud credits.
Microsoft CEO Satya Nadella found out about it after the fact. "Did we get called to participate?" he wrote to his team that day, in an email that surfaced only recently in a court filing from Elon Musk's suit against Microsoft and OpenAI. "AWS seems to have sneaked in there."
Microsoft moved fast. Within months, Nadella was courting OpenAI. The AWS contract was up for renewal in September 2016. "Amazon started really dicking us around on the [terms and conditions], especially on marketing commits," Sam Altman wrote to Musk, who was then OpenAI's co-chair. "And their offering wasn't that good technically anyway."
By that November, Microsoft had won the business.
Six years later, with the launch of ChatGPT, that bet paid off in ways nobody could have predicted. Microsoft stock surged. Amazon, like many others in the industry, was scrambling to figure it all out, suddenly trying to keep up with the future of a market it had long defined.
Pivoting to generative AI
The AWS CEO at the time was Adam Selipsky, who had helped build the business from its earliest days before leaving in 2016 to run Tableau, the data visualization company. He returned in May 2021 to lead AWS after Jassy was promoted to succeed Bezos as Amazon CEO.
In a May 2024 interview with Selipsky, on one of his last days in the role, GeekWire asked him directly whether Amazon had been caught flat-footed by the rise of generative AI.
After a member of his team interjected to say the question seemed informed by reading too many Microsoft press releases, Selipsky dismissed the idea that AWS was behind.
While that narrative might have "more sizzle" and generate clicks, Selipsky said, the reality was different, as evidenced by Amazon's years of work in AI and machine learning.
AWS had announced Inferentia, a chip for deep learning, in 2018, building on its 2015 acquisition of Annapurna Labs, the Israeli chip startup. It began work on CodeWhisperer, an AI coding assistant, in 2020, before GitHub Copilot existed, the company notes. In 2021, it launched Trainium, a chip designed to train models with 100 billion or more parameters.

At the same time, Selipsky acknowledged that AWS had "pivoted many thousands of people from other interesting, important projects to work on generative AI," a scale of reallocation that signaled something other than business as usual inside the company.
Tomsen Bukovec, who now oversees AWS's core data services including S3, analytics, and streaming, said her team's response was less a pivot than a process of learning.
They educated themselves on what the technology meant for their services, she said, and thought deeply about what it would look like for AI to both create and consume data at scale.
The question her team started asking in late 2022: what does the world look like when 70 to 80 percent of the usage of your services comes through AI?
"AI is going to use it at 10 times to 100 times the rate of a human, and it's going to do it all day long, all the time, 24 hours," she said. "AI never goes to sleep."
Scrambling to meet the moment
The pressure to catch up in generative AI was felt across the company. In a lawsuit filed in Los Angeles Superior Court, an AI researcher who worked on Amazon's Alexa team alleged that a director told her to ignore internal copyright policies because "everyone else is doing it."
The complaint described ChatGPT's launch in late November 2022 as causing "panic within the organization." Amazon has denied the allegations, and the case is still pending.
On Amazon's earnings call in early February 2023, two months after ChatGPT's launch, Amazon CEO Andy Jassy didn't discuss generative AI or large language models.

By the next quarter's call, in late April 2023, he spoke about it for nearly ten minutes, describing it as "a remarkable opportunity to transform virtually every customer experience that exists."
In September 2023, the company announced an investment of up to $4 billion in Claude maker Anthropic, the AI startup founded by former OpenAI researchers. The investment would eventually grow to $8 billion, which seemed like a lot at the time.
Selipsky left AWS in mid-2024. Garman, whom Selipsky had hired as a product manager in 2006, succeeded him as CEO, charged with leading the cloud business into the new era.
From CodeWhisperer to Bedrock
The roots of Amazon's response actually predated ChatGPT by more than two years, though the idea faced initial skepticism internally. In 2020, Atul Deo, an AWS product director, wrote a six-page memo proposing a generative AI service that could write code from plain-English prompts.
Jassy, who was still leading AWS at the time, wasn't sold. His response, as Deo later told Yahoo Finance, was that it sounded like a pipe dream. The project launched in 2023 as CodeWhisperer, an AI coding assistant.
But by then, ChatGPT had redrawn the landscape, and the team realized it could offer something broader: a platform giving customers access to a range of foundation models through a single service. AWS called it Bedrock. The name reflected an ambition to do for AI models what the company had done years earlier with its Relational Database Service, which wrapped MySQL, Oracle, and other database engines in a common management layer.
Bedrock would do the same for large language models.
The decision to offer multiple models rather than push a single in-house option was deliberate, and rooted in a pattern AWS had followed for years. It brought multiple CPUs to the cloud: AMD, Intel, and its own Graviton. It offered Nvidia GPUs alongside its own Trainium chips.
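The RDS analogy amounts to a thin common layer that dispatches one request format to heterogeneous backends. A hypothetical sketch of that pattern follows; the registry, model IDs, and adapter functions are invented for illustration and are not the real Bedrock API (which is reached through boto3's `bedrock-runtime` client):

```python
from typing import Callable, Dict

# Hypothetical adapters standing in for different providers' APIs.
def _call_provider_a(prompt: str) -> str:
    return f"[provider-a] {prompt}"

def _call_provider_b(prompt: str) -> str:
    return f"[provider-b] {prompt}"

# One registry, many backends: the caller never sees provider differences.
MODEL_REGISTRY: Dict[str, Callable[[str], str]] = {
    "vendor-a.model-v1": _call_provider_a,
    "vendor-b.model-v2": _call_provider_b,
}

def invoke_model(model_id: str, prompt: str) -> str:
    """Swap models by changing an ID, not by rewriting integration code."""
    try:
        backend = MODEL_REGISTRY[model_id]
    except KeyError:
        raise ValueError(f"unknown model: {model_id}") from None
    return backend(prompt)

print(invoke_model("vendor-a.model-v1", "hello"))  # [provider-a] hello
```

The design choice mirrors the article's point: when every model sits behind the same call, customers can chase whichever model is best this month without re-platforming.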
Fastest-growing AWS service
Amazon's view is that choice drives competition, which drives down prices for customers.
"We knew there was never going to be one model to rule everybody," said Dave Brown, the AWS vice president who oversees EC2, networking, and custom silicon. "And even the best model was not going to be the best model all the time."
Bedrock launched in preview in April 2023 and reached general availability that September, with models from Anthropic, Meta, and others alongside Amazon's own. Two years later, it had become the fastest-growing service AWS had ever offered, with more than 100,000 customers.
On Amazon's most recent earnings call, Jassy described it as a multi-billion-dollar business, with customer spending growing 60% from one quarter to the next.
At the end of 2024, Amazon added its own entry to the model race, releasing a family of foundation models called Nova, positioned as a lower-cost, lower-latency alternative to the third-party models on the Bedrock platform.

As Fortune’s Jason Del Rey observed, it was a page from the e-commerce playbook: build the marketplace first, then stock it with a house brand. Just as Amazon sells goods from thousands of merchants alongside its own private-label products, Bedrock offered models from Anthropic, Meta, and others, and now Amazon’s own models to go with them.
At re:Invent in late 2025, AWS pushed further, unveiling what it called “frontier agents” — autonomous AI systems designed to work for hours or days without human involvement.
One, built into Amazon’s Kiro coding platform, can navigate multiple code repositories to fix bugs while a developer sleeps. Last month, the Financial Times reported that Amazon’s own AI coding tools caused at least one AWS service disruption. Amazon acknowledged the incident but publicly disputed aspects of the reporting, citing a misconfigured feature, not the AI itself.
The $200 billion bet
Like its rivals, AWS is also building the physical infrastructure to back it up. In 2025, less than a year after it was announced, AWS opened Project Rainier, one of the world’s largest AI compute clusters, centered in Indiana, powered by more than 500,000 of Amazon’s Trainium2 chips.
Named after the mountain visible from Seattle, Rainier was built to train and run Anthropic’s next generation of Claude models, using Amazon’s own Trainium chips rather than Nvidia GPUs.
Kalyanaraman, the AWS vice president who oversees global infrastructure, said the project forced AWS to rethink its supply chain from the ground up. The goal was to minimize the time between a chip leaving its fabrication facility and serving a customer workload.
Rainier was built at a faster pace than anything AWS had ever done, Kalyanaraman said, with more than 100,000 Trainium chips available to Anthropic in under a year. But it wasn’t a one-off. He called it the new template for how AWS would build AI infrastructure going forward.
Then, late last month, came the deal that brought the story full circle.
OpenAI — the company that launched on AWS in 2015 and left for Microsoft Azure the following year — announced a partnership with Amazon that included up to $50 billion in investment and a cloud agreement worth more than $100 billion over eight years.
OpenAI committed to run workloads on Amazon’s custom Trainium chips, making it the second major AI lab after Anthropic to do so. The two companies had been talking since at least May 2023, according to SEC filings, but Microsoft’s right of first refusal on OpenAI’s compute had blocked a deal until those restrictions were loosened in the latest renegotiation.

By late 2025, AWS revenue was growing at its fastest pace in more than three years, up 24% to $35.6 billion a quarter. The company disclosed that its Trainium and Graviton chips had reached a combined annual revenue run rate of more than $10 billion. Bedrock had surpassed 100,000 customers and was generating revenue in the billions.
The competitive picture was also coming into sharper focus.
In mid-2025, Microsoft disclosed standalone Azure revenue for the first time: $75 billion a year, up 34%. Google Cloud had crossed a $50 billion annual run rate. AWS, at more than $116 billion a year at the time, was still larger — but not running away with the market.
All of this helps to explain Amazon’s record capital spending. On the company’s latest earnings call, Jassy defended plans to spend $200 billion this year, most of it on AI infrastructure.
The figure is so large it could consume nearly all of Amazon’s operating cash flow. Facing a Wall Street backlash, Jassy called artificial intelligence “an extraordinarily unusual opportunity to forever change the scale of AWS and Amazon as a whole.”
What’s next: Bear and bull cases
Longtime observers are divided on the company’s AI bet.
Corey Quinn, a cloud economist who works with AWS customers through his Duckbill consultancy, sees little real-world traction for Amazon’s Nova models. “You know somebody is an Amazon employee when they talk about Nova, because nobody else does,” he said.
Some businesses bypass Amazon’s Bedrock platform entirely because of capacity constraints and slower speeds, he said, going to third-party providers like Anthropic rather than inserting Bedrock as a “middleman” — unless they’re trying to retire their committed AWS spend.
Looking forward, Quinn pointed to a historical parallel. Twenty years ago, Cisco was the most valuable company in the world, the backbone of the internet. Today it’s a profitable but largely invisible utility. AWS, he said, could be headed for the same fate.
“It’s very clear that there will be a 40th anniversary for AWS, because that inertia doesn’t go away,” Quinn said. “But will it be at the center of tech policy and massive companies, or is it going to be much more like the Cisco of today?”
Om Malik, the veteran tech writer, cast a critical eye on Amazon’s OpenAI investment.
By his math, Amazon is paying roughly 16 times more per percentage point of OpenAI than Microsoft did, with none of the original IP rights, revenue share, or primary API access that Microsoft locked up years ago. The cost of being late, Malik wrote, is measured in billions.

Rossman, the former Amazon executive who was once skeptical about AWS demand from big enterprises, sees a different picture. He agrees that AWS is strong in infrastructure, the picks and shovels of the cloud. But where Quinn sees that as a ceiling, Rossman sees it as a moat.
The models are the commodity, Rossman contends. They leapfrog one another constantly. What matters is everything the models run on and through: the chips, the servers, the data centers, the power. AWS is building more of that stack than most competitors.
“That’s where the value is,” he said.
The long-term winners, he said, will be the companies that deliver the best AI at the lowest cost per token. That’s where AWS’s vertical integration — from Trainium chips to Bedrock to the data center to potentially more in the future — gives it an advantage competitors can’t easily replicate.
As for the risk of spending too much, Rossman put it simply: you have to decide which side of history you’d prefer to err on, overbuilding or underbuilding. Amazon isn’t taking chances.
In an internal all-hands meeting last week, Jassy said AI could help AWS reach $600 billion in annual revenue, double his own prior estimate, Reuters reported. He had been thinking for years that AWS could be a $300 billion business in a decade. AI, he said, changed the math.
Any way you add it up, it’s a lot of steaks.

