Compute

High Performance Compute: A Primer

BitOoda Special Report 5/12/23

Vivek Raman
Key Takeaway #1

In an increasingly digitized world, the growth of new industries (e.g., AI) is driving demand for industrial-grade high performance compute.

Key Takeaway #2

Compute capacity is a new type of commodity that is universally demanded across end users (e.g., high frequency trading, computational bio, graphics rendering, and AI).

Key Takeaway #3

The growth of Compute is leading to specialization and the formation of a “Compute Stack” consisting of data centers, hardware and software players, and power.

Key Takeaway #4

Compute represents a superset of today’s high-growth industries.

We live in an increasingly digitized world where the demand for computational resources is surging. The blossoming renaissance in artificial intelligence technology is in the spotlight, leading the charge in demand for Compute. ChatGPT has burst onto the scene to become a household name, reaching 100mm users just two months after its launch. For reference, it took TikTok nine months and Instagram more than two years to reach the 100mm user mark. Not only has AI found instant product-market fit, but it has sparked a surge in the demand for Compute infrastructure – from hardware (GPUs) to software to cloud and datacenter services.​

Compute is a finite resource. In fact, as a16z, the leading VC fund, stated: “Compute capacity on specific hardware is a commodity.” Moreover, it is an expensive commodity: many startup companies in the AI space are spending 80%+ of their total capital raised on Compute resources, and ChatGPT itself is estimated to cost $700,000 per day to run. Even so, estimates suggest that demand for AI Compute outstrips the supply of Compute (available hardware, software, and cloud/datacenter setups) by a factor of 10 to 1. Clearly, despite high costs, the demand for Compute is increasing.

Zooming out, how do we define this all-encompassing term we call “Compute” – or more specifically, “high performance compute”? And why is surging demand for Compute entering the fray now?​

“Compute” is not a foreign concept – it is simply a concept that is abstracted away for most users into the devices they interact with on a daily basis. On the work front, people use word processors, spreadsheets, e-mail, and the Internet all day, on both work and personal computers and devices. On the personal front, people play video games, stream movies and TV, interact on social media, and browse the Internet – all tasks that can be done on PCs and smartphones.

However, these consumer-grade tasks are fairly lightweight and can be accomplished with standardized hardware setups (PCs/laptops/phones) and a decent Internet connection. High Performance Compute (HPC) refers to the set of industrial-grade applications that require more powerful computers and the accompanying infrastructure (hardware, software, power, bandwidth, data storage) to solve complicated tasks in cutting-edge industries – such as AI, graphics rendering, computational biology, high frequency trading, and BTC mining, among other applications.

Where consumer-grade Compute tasks can be accomplished with minimal infrastructure buildout, the HPC space has given rise to a flourishing ecosystem of infrastructure players, including data centers, cloud service providers, hardware and software players, and power operators. These players represent the “Compute Stack” that will evolve into a more robust infrastructure ecosystem. A downstream effect of the increased energy requirements from a growing Compute sector could be the use of sustainable, renewable energy to power energy-intensive parts of the Compute Stack, such as data centers and cloud service providers.

The rise of Compute represents a significant trend that is larger than any one industry – whether AI, blockchain, biology, or rendering. We believe that in the end, all roads will flow through Compute, and Compute will be the superset of the high-growth industries. Finally, we conclude that Compute presents a positive-sum ecosystem for all players involved – helping improve access to consumers, optimizing business infrastructure for suppliers, and resulting in a greener, renewable grid for all.​

​What is High Performance Compute?

In the traditional model for computing, consumers (individuals, organizations) would use their own computer hardware installed locally (in their home or in an office) to perform tasks. These tasks were retail-grade and included simple, small-scale applications like the following: ​

  • Data Processing / Spreadsheets​
  • Internet Browsing / Social Media​
  • Video Streaming​
  • Gaming​

High Performance Compute tackles advanced tasks that require industrial-grade computers. While a single consumer-grade PC setup has limited capacity, high performance compute is used to solve complex problems, conduct advanced simulations, or perform large-scale data processing tasks that are beyond the capabilities of a single computer or workstation.​

To achieve this, high performance compute systems consist of clusters of powerful computers or supercomputers, which are interconnected via high-speed networks to provide increased processing capacity and computational efficiency. This can be done using datacenters and the cloud.​
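
As a simple illustration of this principle, the sketch below (a minimal, illustrative Python example, not drawn from any particular HPC deployment) splits a compute-heavy job into independent chunks and farms them out to a pool of workers – the same pattern an HPC cluster applies across many networked machines rather than across the cores of a single box.

from multiprocessing import Pool

def simulate_chunk(seed: int) -> float:
    """Stand-in for one slice of a compute-heavy job (e.g., a simulation step)."""
    total = 0.0
    for i in range(1_000_000):
        total += ((seed + i) % 7) * 0.001
    return total

if __name__ == "__main__":
    chunks = list(range(32))          # 32 independent work units
    with Pool(processes=8) as pool:   # 8 local workers; a cluster scales this across machines
        results = pool.map(simulate_chunk, chunks)
    print(f"aggregate result: {sum(results):,.1f}")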

High Performance Compute refers to the ecosystem of industrial-grade applications that require powerful computers and the accompanying infrastructure (hardware, software, power, bandwidth, data storage) to solve complicated tasks in cutting-edge industries, including (but not limited to):

  • Artificial Intelligence​
  • Graphics Rendering​
  • Computational Biology​
  • High Frequency Trading​

In a world with increasingly complex applications (training self-driving cars, more immersive gaming, using ChatGPT), the demand for High Performance Compute (which we refer to as “Compute” or “HPC” in this report) and its infrastructure is likely to rapidly increase.​

Supply and Demand for High Performance Compute

  • While individual, consumer-grade computers can operate in a closed system – plug a computer into an outlet at home and connect to the Internet to perform basic tasks – HPC refers to a more comprehensive ecosystem of infrastructure.​
  • While HPC can take place on-premises (for large corporations or consumers that can set up their own architecture), the Compute space has been dominated by Cloud Service Providers (AWS, Google Cloud, Microsoft Azure), as well as public datacenter players (used by the CSPs as well as individual operators such as CoreWeave).​
  • The Compute ecosystem also comprises (1) hardware manufacturers for processor chips (such as GPUs, FPGAs, and ASICs), hard drives (data storage), and broadband (uninterrupted, reliable connectivity); (2) software vendors; and (3) system integrators that connect these various Compute pieces.​

  • In the next section, we will explore the HPC demand side, which requires increasingly intensive Compute infrastructure.​

Applications of High Performance Compute

Compute Applications - Artificial Intelligence

  • The most high-profile demand driver for Compute resources comes from the ongoing rise in Artificial Intelligence. The growth of AI is entirely constrained by Compute infrastructure, and estimates suggest that the demand for AI Compute exceeds supply by a factor of 10x. ​
  • Estimates suggest that ChatGPT costs $700,000 per day to run, or roughly $0.36 per query (see the back-of-envelope sketch after this list). Compute costs are by far the largest expense for AI companies, with some VC-funded AI companies spending up to 80% of their total capital raised on Compute resources.​
  • Why are Compute costs for AI so high? The underlying algorithms, which must run on HPC infrastructure, are far more computationally demanding than typical consumer-grade computing tasks. For example, running ChatGPT’s underlying large language model (LLM) to generate even a single word is incredibly Compute-intensive.​
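
As a back-of-envelope sketch, the arithmetic below combines the two estimates quoted above (cited figures, not measured values) to show the implied query volume and annualized run cost:

daily_cost_usd = 700_000        # cited estimate for ChatGPT's daily run cost
cost_per_query_usd = 0.36       # cited per-query estimate

implied_queries_per_day = daily_cost_usd / cost_per_query_usd   # ~1.9 million queries/day
annualized_cost_usd = daily_cost_usd * 365                      # ~$256mm per year

print(f"Implied queries per day: {implied_queries_per_day:,.0f}")
print(f"Annualized run cost: ${annualized_cost_usd / 1e6:,.0f}mm")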

Figure: AI Applications
Source: https://a16z.com/2023/04/27/navigating-the-high-cost-of-ai-compute/, https://www.windowscentral.com/microsoft/chatgpt-costs-dollar700000-per-day-to-run-which-is-why-microsoft-wants-to-make-its-own-ai-chips

Compute Applications - Graphics Rendering

  • Compute is instrumental to graphics rendering, enabling the efficient processing and generation of images or video from 3D models, scenes, and other data. In rendering, Compute primarily performs the complex calculations required to convert 3D models and scenes into 2D images or animations that can be displayed on screens (a toy example of this projection math follows this list).​
  • In a world of increasing digitization (the Netflix economy, increased animation, moving to a “metaverse,” and the move toward augmented and virtual reality), Compute requirements for graphics are likely to increase.​
  • As a case study, Pixar has its own “render farm” – a supercomputer (top 25 globally by size) consisting of 2,000 machines and 24,000 cores, and it still took 2 years to render the film “Monsters University.”​
  • Pixar opened access to its render Compute in a service called “RenderMan.”​
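
To make those rendering calculations concrete, the toy sketch below projects a single 3D point onto a 2D image plane using a simple pinhole-camera model; the coordinates are made-up values for illustration, and production renderers such as RenderMan perform far more elaborate versions of this math (lighting, shading, ray tracing) for millions of points per frame.

def project(point3d, focal_length=1.0):
    """Project a 3D point (in camera space) onto the 2D image plane."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal_length * x / z, focal_length * y / z)

corner = (2.0, 1.5, 4.0)   # an assumed vertex of some 3D model, in camera space
print(project(corner))     # -> (0.5, 0.375) on the image plane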

Figure: Pixar's Renderman Platform
Source: https://renderman.pixar.com/product, https://nofilmschool.com/why-pixars-24000-core-supercomputer-still-takes-24-hours-render-each-frame

Compute Applications - Computational Biology

  • Computational biology augments the field of biology with software and technology, using mathematical models, computational techniques, and algorithms to analyze and understand biological systems. As a result, computational bio harnesses Compute to handle complex calculations and simulations, enabling healthcare companies and researchers to study biological systems from micro to macro scales. ​
  • AlphaFold, developed by Google’s DeepMind, is designed to predict the three-dimensional structure of proteins with remarkable accuracy using a Compute-intensive algorithmic process. AlphaFold has made a significant impact on the field of structural biology, as determining protein structures has traditionally been a slow, expensive, and labor-intensive process.​
  • Running AlphaFold requires GPUs and other non-consumer grade hardware.​

Figure: AlphaFold
Source: https://www.deepmind.com/research/highlighted-research/alphafold, https://pubs.acs.org/doi/10.1021/acs.jpcb.2c04346

Compute Applications - High Frequency Trading

High frequency trading is a type of algorithmic trading in which financial instruments are bought and sold at extremely high speeds, often within fractions of a second. Compute infrastructure therefore plays a vital role in enabling HFT strategies, as it provides the necessary computational power and speed to execute trades and analyze market data rapidly. Applications for Compute within the high frequency trading realm include:​

  • Low-latency trading: minimizing the time between receiving market data and executing trades​
  • Data processing: ingesting large amounts of data and running algorithms for trading signals​
  • Colocation: setting up trading servers next to exchange servers to minimize distance and maximize the speed of information transmission (a rough latency calculation follows this list)​
  • Hardware acceleration: using GPUs, FPGAs, and advanced hardware to gain an edge over competitors​
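
As a rough illustration of the colocation point above, the arithmetic below uses assumed distances and the common rule of thumb that signals travel through optical fiber at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum):

SPEED_IN_FIBER_KM_PER_S = 200_000   # rough rule of thumb for optical fiber

def one_way_latency_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds, ignoring switching overhead."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1e6

for distance_km in (0.1, 50, 1_000):   # colocated rack vs. cross-town vs. cross-region
    print(f"{distance_km:>7} km -> {one_way_latency_us(distance_km):8.1f} microseconds one way")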

Figure: High Frequency Trading
Source: https://www.quora.com/Is-algorithm-trading-same-as-high-frequency-trading

The Compute Infrastructure Landscape

Who Are the Players?

  • Compute infrastructure requirements are complex, resulting in the growth of several sub-industries that play a part in the overall Compute ecosystem. ​
  • These players can be virtual (software) or physical (hardware). Ultimately, as each sub-industry grows, the entire “Compute Stack” can be disintermediated over time to counter a monopolistic or oligopolistic structure.​
  • As each player below competes on price, efficiency, and scalability, the winner of this race will be the end consumer, and the losers will be the large incumbents, as competition drives down prices and margins.​
  • In this section, we will explore different players in the “Compute Stack” in detail: what role they play in the ecosystem, example companies, and example products.​

Compute Infrastructure - Data Centers

  • Data centers are transforming from real estate and infrastructure plays into optimizable growth assets that can be viewed as “Compute refineries.”​
  • This “Compute refinery” model implies that data centers can optimize their operations for lowest cost (via best hardware, highest bandwidth, most reliable storage, and lowest power cost) and then provide their operational infrastructure to cloud service providers and other end consumers.​
  • Data centers are classified by tiers (categorized by uptime), with Tier I being the most basic (99.671% uptime, lowest redundancy) and Tier IV being the highest, “fault-tolerant” model (99.995% uptime, highest level of redundancy); a quick conversion of these figures into annual downtime follows this list. Different HPC consumers will have differing uptime needs.​
  • Many AI applications (particularly model training) do not require a high level of uptime, low latency, or redundancy and are currently overpaying in a resource-constrained environment. This bottlenecks both AI growth and Compute availability and leaves room for optimization.
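
Converting the quoted uptime percentages into allowable downtime per year makes the gap between tiers concrete (simple arithmetic on the figures above; formal tier definitions also cover redundancy and maintenance requirements):

HOURS_PER_YEAR = 24 * 365

for tier, uptime_pct in [("Tier I", 99.671), ("Tier IV", 99.995)]:
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{tier}: ~{downtime_hours:.1f} hours of downtime per year")
    # Tier I: ~28.8 hours/year; Tier IV: ~0.4 hours/year (roughly 26 minutes)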

Figure: Physical Data Center
Source: https://datacenterlocations.com/equinix/

Compute Infrastructure - Cloud Service Providers

  • Cloud Service Providers (CSPs) are today’s front-facing brands for Compute. The largest players – Amazon Web Services, Microsoft Azure, Google Cloud – are the dominant players in the space, and they are accelerating their lead by expanding into additional verticals.​
  • For example, the largest CSPs have vertically integrated to run their own data centers (rather than sharing data center capacity) and are even expanding into application-level integration (e.g., Microsoft’s $10+ bn investment into OpenAI to integrate ChatGPT into its services). ​
  • CSPs also have a software component, with front-end access points for consumers to manage their cloud compute sessions.​
  • Ultimately, we could see smaller entrants into the CSP space (e.g., CoreWeave) spark increased competition, benefiting the end users.​
Figure: CSPs
Source: Various Company Logos

Compute Infrastructure - Hardware Players

  • One of the main bottlenecks for the growth of Compute availability lies in the scarcity of hardware supply and the resulting hardware acceleration race to increase efficiency in Compute processes.​
  • While consumer-grade personal computing can be handled by the CPUs inside a PC, large-scale applications (e.g., AI) require more advanced hardware (GPUs, FPGAs, ASICs) run in parallelized setups with optimized power, uptime, and maintenance. This requires a data center to physically house all the components and, more importantly, the chips and processors themselves (a rough throughput comparison follows this list).​
  • Hardware players such as Nvidia and AMD are the bellwether producers of GPUs and are developing next-generation solutions specifically tailored for applications like AI. FPGA and ASIC players are also joining the race.​
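
The order-of-magnitude arithmetic below illustrates why this hardware race matters for AI workloads. The throughput figures are assumptions for illustration – roughly 1 TFLOPS for a high-end consumer CPU versus Nvidia's published ~312 TFLOPS dense FP16 tensor-core peak for the A100 – the training budget is hypothetical, and real workloads rarely reach peak, so treat the output as directional only:

cpu_tflops = 1.0          # assumed ballpark for a high-end consumer CPU
a100_tflops = 312.0       # Nvidia's published dense FP16 tensor-core peak for the A100
training_flops = 1e21     # hypothetical training budget (FLOPs), for illustration only

seconds_per_day = 86_400
cpu_days = training_flops / (cpu_tflops * 1e12) / seconds_per_day
gpu_days = training_flops / (a100_tflops * 1e12) / seconds_per_day
print(f"CPU-only: ~{cpu_days:,.0f} days; single A100: ~{gpu_days:,.0f} days")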
Figure: Hardware Players
Source:  https://www.nvidia.com/en-us/data-center/a100/

Compute Infrastructure - Software Players

  • Compute infrastructure continues to shift to modern platforms that utilize a cloud operating model, with a significant emphasis on cloud system software and containers.​
  • With cloud-based software, users can quickly deploy new applications and updates, reducing the time-to-market for new products and features.​
  • Software in the cloud can be easily scaled up or down according to user demand, without the need to invest in additional hardware.​
  • These technologies were accelerated during the pandemic. Coupled with the shift to remote work, they are quickly becoming the standard for modern enterprise IT.​

Figure: Software Players in the Compute Stack
Source: Various Company Logos

Compute Infrastructure - Power Players

  • Although many sources have identified that demand for Compute is surging, the importance of power players is rarely emphasized. Estimates suggest that ChatGPT consumed as much electricity as 175,000 people in January 2023 alone! As the new high performance compute industry surges, the associated power requirements will also surge, and along with that will come a regulatory focus.​
  • Just as BTC mining presented an opportunity to plug into renewable energy sources and spark a green energy initiative, the same trend could occur (on a much larger scale) with the Compute ecosystem. Indeed, Compute is the superset of BTC mining and various other applications, and with increased power consumption comes the opportunity to facilitate more sustainable, renewable energy generation for the blossoming Compute industry. ​

Figure: Power Players in Compute
Source: https://www.bloomberg.com/news/articles/2023-03-09/how-much-energy-do-ai-and-chatgpt-use-no-one-knows-for-sure#xj4y7vzkg

Compute Infrastructure - BitOoda's Role in the Compute Ecosystem

As Compute transforms into a commodity, BitOoda has developed a flywheel of products and services designed to help clients gain access to the broad spectrum of resources and market solutions across the Compute landscape.​

BitOoda can help BTC miners and former ETH miners that are evaluating purpose-built data centers optimize their assets and strategies for Compute.

Conclusions: The Great Convergence

The evolution of the Compute ecosystem is well underway. Demand for Compute is accelerating from applications such as AI, rendering, computational biology, finance, and blockchain. Meanwhile, the supply of Compute, while constrained, is evolving into a “Compute Stack” consisting of data centers, cloud service providers, hardware and software providers, and power producers.​

Our key themes to watch in the Compute space include:​

1. All Roads Flow Through Compute

In an increasingly digitized world, the demand for Compute power is only increasing. Whether from new industries like AI or established industries like finance, the common denominator is Compute. Additionally, Compute has evolved beyond the consumer-grade, personal computer capacity; data centers will evolve into “Compute refineries.” ​

2. Compute is the Ultimate Growth Industry

The highest-growth industries in the digital world will be powered by Compute, just as the growth engines of the industrial world were powered by oil and fuel. Indeed, Compute is transforming into a digital commodity and is the underlying fuel for the growth of new sectors.​

3. The Great Convergence: 1 + 1 = 3

Many industries operate in zero-sum environments. We believe the growth of Compute will result in a positive-sum outcome, benefiting the entire “Compute Stack” – from the data centers, to the cloud service providers, to the hardware/software players, and ending with the consumer. Additionally, why do we think Compute and power are a perfect marriage? The mandate for renewable assets is growing alongside the growth of Compute, and Compute can help facilitate a global energy transition toward renewable, sustainable power.​

Risks to the Rapid Growth of Compute

Like any new industry that has significant impact on consumers, corporations, and the broader infrastructure ecosystem, the growth of Compute will present risks that must be understood and addressed:​

Downside Risks

  • Regulatory Concerns – if the Compute space continues to consolidate into a monopolistic or oligopolistic structure, antitrust regulators may step in and require dispositions or place limits on the power of the large, dominant cloud service providers. The mitigant is decentralization of the Compute Stack to address concentration risk.​
  • Environmental Impact – increasing demand for Compute resources translates into increased demand for natural resources (such as power, materials for hardware development, and real estate). Without climate-aligned growth, this could have a negative environmental impact.​
  • Privacy / Data Security – the disintermediation of the Compute Stack into a wider range of players (beyond the large public players) could introduce security and privacy concerns for users. This can be mitigated by privacy and security regulations for firms in the Compute industry.​

Upside Risks

  • Regulatory Integration – just as the financial industry has been regulated to separate different players in the “financial stack” (broker-dealers, ATSs, market makers, etc.), regulatory integration could help the “Compute Stack” flourish and benefit each sub-industry (data centers, hardware providers, etc.).​
  • Decentralized Market – an increasingly disintermediated market will help (1) reduce concentration risk and (2) increase competition for more players to participate in the growing Compute space.​
  • Accelerating Feedback Loops – with a more efficient Compute Stack, the innovation flywheel could accelerate, with demand drivers like AI and computational biology combining to deliver new products (novel drugs, etc.). This would reinforce the value of Compute and further drive demand.​
  • Reducing Redundancy – a mature “Compute Stack” would result in lower need for redundancy, which could ease the power and resource requirements for Compute users and could ultimately result in a greener grid as well as more efficient Compute markets.​

Disclosures

Purpose

This research is only for the clients of BitOoda. This research is not intended to constitute an offer, solicitation, or invitation for any securities and may not be distributed into jurisdictions where it is unlawful to do so. For additional disclosures and information, please contact a BitOoda representative at info@bitooda.io.​

Analyst Certification

Vivek Raman, the primary author of this report, hereby certifies that all of the views expressed in this report accurately reflect his personal views, which have not been influenced by considerations of the firm’s business or client relationships.​

Conflicts of Interest

This research contains the views, opinions, and recommendations of BitOoda. This report is intended for research and educational purposes only. We are not compensated in any way based upon any specific view or recommendation.​​

General Disclosures

Any information (“Information”) provided by BitOoda Holdings, Inc., BitOoda Digital, LLC, BitOoda Technologies, LLC or Ooda Commodities, LLC and its affiliated or related companies (collectively, “BitOoda”), either in this publication or document, in any other communication, or on or through http://www.bitooda.io/, including any information regarding proposed transactions or trading strategies, is for informational purposes only and is provided without charge.  BitOoda is not and does not act as a fiduciary or adviser, or in any similar capacity, in providing the Information, and the Information may not be relied upon as investment, financial, legal, tax, regulatory, or any other type of advice. The Information is being distributed as part of BitOoda’s sales and marketing efforts as an introducing broker and is incidental to its business as such. BitOoda seeks to earn execution fees when its clients execute transactions using its brokerage services.  BitOoda makes no representations or warranties (express or implied) regarding, nor shall it have any responsibility or liability for the accuracy, adequacy, timeliness or completeness of, the Information, and no representation is made or is to be implied that the Information will remain unchanged. BitOoda undertakes no duty to amend, correct, update, or otherwise supplement the Information.​

The Information has not been prepared or tailored to address, and may not be suitable or appropriate for the particular financial needs, circumstances or requirements of any person, and it should not be the basis for making any investment or transaction decision. The Information is not a recommendation to engage in any transaction. The digital asset industry is subject to a range of inherent risks, including but not limited to: price volatility, limited liquidity, limited and incomplete information regarding certain instruments, products, or digital assets, and a still emerging and evolving regulatory environment. The past performance of any instruments, products or digital assets addressed in the Information is not a guide to future performance, nor is it a reliable indicator of future results or performance. ​

Ooda Commodities, LLC is a member of NFA and is subject to NFA’s regulatory oversight and examinations. However, you should be aware that NFA does not have regulatory oversight authority over underlying or spot virtual currency products or transactions or virtual currency exchanges, custodians or markets.​

BitOoda Technologies, LLC is a member of FINRA.​

“BitOoda”, “BitOoda Difficulty”, “BitOoda Hash”, “BitOoda Compute”, and the BitOoda logo are trademarks of BitOoda Holdings, Inc.​

Copyright 2023 BitOoda Holdings, Inc. All rights reserved. No part of this material may be reprinted, redistributed, or sold without prior written consent of BitOoda.​
