Compute

Will the AI Boom Create Opportunities in Financial Markets? 

BitOoda Compute Research, 09/18/2023

Vivek Raman
Key Takeaway #1

Although AI is not a new area of research, the AI boom began in earnest with ChatGPT and is a secular (versus cyclical) trend.

Key Takeaway #2

The application layer of AI is vast, ranging from “static” AI applications to “dynamic” AI applications.

Key Takeaway #3

The infrastructure part of the “AI stack” is currently the bottleneck for widespread AI proliferation. Hardware, software, and, more broadly, “compute” – the underlying fuel for AI – are in scarce supply.

Key Takeaway #4

Financializing the compute ecosystem will democratize access to AI resources for the broader community.


On November 30, 2022, OpenAI boldly launched ChatGPT – a user-friendly, interactive, front-end application for its flagship AI LLM product – sparking a sudden and monumental paradigm shift that opened the floodgates for AI proliferation.

Although artificial intelligence is by no means a new area of research and innovation, the Cambrian explosion in mass adoption was catalyzed by the launch of ChatGPT. Indeed, ChatGPT took the somewhat abstract and nebulous concept of AI and repackaged it into a universal consumer-grade application. Its success was unparalleled: ChatGPT took a mere two months to reach the 100 million user mark. For context, it took the wildly popular TikTok app nine months, and Instagram 2.5 years, to reach the same milestone. It is not an exaggeration to say that ChatGPT was the “Google” or “iPhone” moment for unlocking AI for consumer-grade and institutional-grade applications.

What novel applications does the proliferation of LLMs (large language models) create? We separate these into “static” and “dynamic” AI applications. “Static” AI applications include chatbots and image generation; “dynamic” applications include AI agents.

After exploring the design space for AI applications, we turn to the major infrastructure players of the emerging AI industry. These include the current incumbents in the LLM space, such as OpenAI and Meta’s Llama. We also look at the “middleware” layer, such as LangChain, which offers a toolset for anyone to customize and train their own AI applications, sparking a new wave of innovation potential at the startup app level and democratizing access to LLMs via API integration.

While the application layer will be the most visible and prominent result of the AI boom, the infrastructure layer is arguably just as important. Digging deeper into the infrastructure part of the “AI stack,” we arrive at the “digital commodity” that fuels the AI industry – which we know as “compute.” (For an overview of compute, see our 5/12/23 report titled “High Performance Compute: A Primer” and our 6/2/23 report titled “The Compute Stack.”)

Although AI applications seem magical on the surface and will spark disruption across all industries, running AI applications is not free. The compute cost is currently the constraining factor for AI growth and is catalyzing a modern-day Gold Rush for compute providers to accumulate GPU infrastructure.

NVIDIA is currently the kingmaker of the GPU industry and provides the most cutting-edge, AI-configured GPUs (the H100 and A100 models). This has resulted in a massive supply-demand imbalance, where the demand for computing resources is seemingly infinite, while the supply of compute hardware is constrained.

We believe this supply-demand imbalance is a product of the early part of the AI boom, and as AI applications, software, and infrastructure mature, there will be novel ways of democratizing access to compute. One way to do so is to introduce financial markets for the AI compute ecosystem.

Ultimately, we conclude that the financialization of compute as the commodity powering AI is inevitable, and we explore financial markets opportunities arising from the AI revolution. These opportunities range from financing and lending facilities using GPU collateral, to standardizing compute contracts, to establishing a secondary market for compute, and to expanding existing compute infrastructure into a decentralized, democratized system that is accessible to the broader community.

“Static” AI Applications

Initial Applications from the AI Revolution

The first wave of widespread AI applications is based on LLMs (large language models) – a direct result of the resounding success and immediate product-market fit of ChatGPT.

Large language models are one of the many subsets of the artificial intelligence realm in which models are “trained on,” or fed, vast amounts of text data to learn patterns of natural language and therefore generate robust, human-like text.

While the previous generation of chat apps, known as chatbots, was much simpler – programmed to respond to specific inputs with pre-set responses – LLMs like ChatGPT use machine learning to generate responses. Rather than being programmed to output specific responses, ChatGPT “learns” from the text data it is trained on and therefore generates more complex, varied responses.

Nevertheless, LLMs are the tip of the iceberg of AI’s potential, as LLMs are trained from “static” data sets to produce ”static” AI applications, which then need to be re-trained or fed more data to continue ”learning.”

In this section, we will explore other ”static” AI applications, which include:

• LLMs for chatting, e.g., ChatGPT

• LLMs for image generation, e.g., DALL-E

• Video / music generation, e.g., Replicate

“Static” AI Applications

Chat Platform

• ChatGPT represents the “iPhone” or “Google” moment for AI. Although AI research has been ongoing for decades, the true consumer product-market fit was resoundingly achieved when OpenAI released ChatGPT.

• After ChatGPT, other AI-powered chat platforms joined the fray, including Google’s Bard and Anthropic’s Claude.

• Chat platforms, powered by LLMs, change the way humans can interact with knowledge. Instead of manually aggregating data and information from a variety of sources (e.g., the “Google search method”), users can simply ask an AI chat platform a question. The platform, using a trained LLM, will aggregate data from its training and emit a human-readable response.

• While chat platforms are the foundational building block for several AI applications, in their static form they essentially amount to a better “Google,” with plenty of potential to grow their functionality and offering.

“Static” AI Applications

Image Generation

• The power of AI extends beyond text to image generation – unlocking a whole new dimension of content generation. Tools like DALL-E, developed by OpenAI, can take a short text prompt and output a series of images that depict the text (see example below).

• How does image generation AI work under the hood? Instead of using OpenAI’s GPT (used for chat AI by ChatGPT), DALL-E uses an OpenAI model called CLIP – or Contrastive Language-Image Pre-training.

• CLIP is trained with hundreds of millions of images along with their captions so that it can learn how text and images correspond. This training allows the CLIP model to learn how related a caption is to an image.

• DALL-E then uses a diffusion model to generate an image based on CLIP text embeddings, ultimately linking the text to images and then generating the photorealistic image in a seemingly magical feat.
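To make the caption-image matching step concrete, below is a minimal, hedged sketch using the open CLIP checkpoint published on Hugging Face (the image path and candidate captions are hypothetical). It scores how well each caption describes a given image, which is the core capability the report describes; the diffusion-based image generation step is not shown.

```python
# Sketch only; assumes `pip install torch transformers pillow` and a local image file.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical local image
captions = ["a cat sitting on a couch", "a city skyline at night"]

# Encode the image and both captions, then compare them in CLIP's shared embedding space.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2%}  {caption}")
```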

“Static” AI Applications

Video / Music Generation

• This combination of image training (CLIP) and image generation (diffusion) can be extended from static pictures to AI-generated videos and even AI-generated music.

• The MusicGen model from Meta’s AI research lab can take a text input (like DALL-E or ChatGPT) and output a music clip representing the text.

• The implications of these static AI applications – from chat applications to AI image generation to AI music / video generation – are vast. The companies that could be displaced span industries and employee levels.

• Any sort of low-level content writing, or research and aggregation, could be swiftly replaced by chat apps like ChatGPT. Art could be massively disrupted by tools like DALL-E, and the entire entertainment sector could be reshaped using AI-generated content (AI scripts, AI videos, AI voice / music).

“Dynamic” AI Applications

How Can AI Applications Expand Beyond “Static” AI?

The potential of “static” AI applications alone is vast and can reshape many industries, from writing to art and film to the “search engine” model made ubiquitous by Google. Much of day-to-day communication, sales, document analysis, and legal work could potentially be automated via static AI applications.

However, this is merely the tip of the iceberg. What if, instead of using just historical data sets to train AI models, AI could interact with everyday real-time data and actually perform tasks? Then, instead of being a souped-up version of Google’s “I’m Feeling Lucky” button, what if AI could dynamically analyze and execute instructions?

"Dynamic” AI applications, which make technology start to look like science fiction, include the following:

• Fine tuning (customizing LLM models)

• Plug-ins (interacting with live data)

• Code interpreter (an AI analyst)

• Function calling (programming AI to execute tasks)

• AI Agents (e.g., BabyAGI or semi-autonomous AI)

• Neural Networks (neural nets) for ”computer vision” (e.g., Tesla Full Self-Driving)

“Dynamic” AI Applications

Fine-Tuning

• Most people who have used ChatGPT notice that there is a knowledge cutoff date of September 2021. This means that the “knowledge” – i.e., the text and data – fed to train the GPT model is only current through 2021. While this is plenty of data for historical inquiries, it does create a limitation where ChatGPT cannot interact with more current or real-time data.

• If ChatGPT, DALL-E, and video/music generation were completely static and could only respond with data from before 2021, the innovation would still be monumental. However, models like GPT are customizable via a process called “fine-tuning.” Fine-tuning allows users to retrain existing models – feeding them additional data and information – to achieve more customized, more current, and more efficient outputs (a brief sketch of the workflow appears at the end of this subsection).

• Fine-tuning makes static AI models dynamic and unlocks vast AI potential.
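As a hedged illustration of that workflow, the sketch below uses the OpenAI Python SDK roughly as it stood in 2023 (v0.28-style interface); the file name, base model, and training data are placeholder assumptions, not a recommendation of any specific setup.

```python
# Sketch only; assumes `pip install openai` (2023-era SDK) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# 1) Upload a JSONL file of chat-formatted examples to fine-tune on (hypothetical file).
training_file = openai.File.create(
    file=open("custom_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2) Launch a fine-tuning job against a base model; the provider trains a custom variant.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```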

“Dynamic” AI Applications

Plugins

• ChatGPT plugins are an example of extending ChatGPT with real-time, dynamic functionality. Plugins were the first major dynamic AI application to flourish right after the release of ChatGPT.

• Plugins are third-party extensions that build upon ChatGPT to extend functionality – accessing real-time information, running computations on external data (not just the data ChatGPT was trained on), or interacting with third-party services like websites.

• Plugins start to unlock the potential of dynamic AI applications. For example, a plugin from Kayak or Expedia can answer real-time travel questions (whereas ChatGPT alone is limited to its 2021 knowledge base), and a “summarizing” plugin can take content from any website and produce a bullet-point summary. Other examples of plugins, such as “AskYourPDF,” in the ChatGPT plugins store are shown below.

“Dynamic” AI Applications

Code Interpreter

• OpenAI Code Interpreter is one of the most consequential plugins that enables ChatGPT to do two things: (1) run code in a conversational manner, which is important for all types of data analytics, and (2) access files uploaded to Code Interpreter.

• This is a very powerful combination. In the traditional workflow, an example task would be for a human to take an Excel sheet full of data and then format it, analyze it, and produce outputs (graphs, tables).

• Code Interpreter eliminates the human element. Instead, a human can upload the Excel sheet, give instructions to ChatGPT to form analyses and conclusions about the data, and ask ChatGPT to output the relevant graphs and tables.

• Code Interpreter is a game changer – it could automate the creation of PowerPoints, Excel sheets, and ultimately replace the “analyst” role.
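To illustrate the workflow described above, here is a hedged sketch of the kind of code Code Interpreter writes and runs on a user’s behalf behind the scenes: load an uploaded spreadsheet, summarize it, and produce a chart. The file and column names are hypothetical.

```python
# Sketch only; assumes `pip install pandas openpyxl matplotlib`.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("uploaded_sales_data.xlsx")        # file uploaded by the user

# Aggregate revenue by region and sort from largest to smallest.
summary = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(summary.to_string())                            # table-style output for the chat

# Produce the chart the user asked for.
summary.plot(kind="bar", title="Revenue by Region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")
```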

“Dynamic” AI Applications

Function Calling

• OpenAI Function Calling is where the true power of dynamic AI applications begins to shine. The current iteration of ChatGPT’s static AI is, in essence, a souped-up version of Google. It produces high quality text outputs but still requires humans to provide prompts and act on information. What if, in addition to providing a text output, ChatGPT could also execute functions and perform tasks on behalf of users?

• This is what Function Calling achieves. For example, instead of ChatGPT simply composing the text as an email response, Function Calling allows ChatGPT to both compose the response and send the email without human interaction. Functions bring programmability to ChatGPT and allow it to interact with the outside world. Ultimately, function calling could lead to ChatGPT making Amazon purchases or booking flights on behalf of users – it is the first step to true autonomous AI utility.
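Below is a hedged sketch of the function-calling pattern using the 2023-era OpenAI chat completions API; the send_email function and its JSON schema are hypothetical examples of a task an application might expose. The model does not send the email itself – it returns structured arguments that the application then executes.

```python
# Sketch only; assumes `pip install openai` (2023-era SDK) and a valid API key.
import json
import openai

functions = [{
    "name": "send_email",  # hypothetical task exposed by the application
    "description": "Send an email on the user's behalf",
    "parameters": {
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Reply to Alice confirming Friday's meeting."}],
    functions=functions,
    function_call="auto",
)

message = response.choices[0].message
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    # send_email(**args)  # the application would execute the task the model requested
    print("Model asked to call send_email with:", args)
```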

“Dynamic” AI Applications

AI Agents

• The ultimate endgame of AI is the development of AGI – “artificial general intelligence” – whereby AI can dynamically learn and execute on its own without human intervention. As we can see from many of the dynamic AI tasks described in this section, plugins, code interpreter, and function calling are steps toward achieving autonomous AGI.

• One intermediate step toward AGI is the concept of an “AI Agent” – which is an AI-powered software program that interacts with its environment to achieve specific goals without continual human interaction. AI Agents provide a natural evolution to AGI as they act increasingly autonomously.

• “BabyAGI,” one implementation of an AI agent, is an example of such an intermediate step. By recursively using function calls and plugins to interact with its environment, BabyAGI can act as an AI agent and complete a series of tasks semi-autonomously, dynamically reacting based on the results. A simplified sketch of this loop follows.
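The sketch below captures the general BabyAGI-style loop in simplified form: keep a task queue, execute the next task with an LLM, then ask the LLM to propose follow-up tasks based on the result. The llm() stub, the objective, and the iteration cap are all placeholder assumptions, not BabyAGI’s actual implementation.

```python
# Simplified agent-loop sketch; swap the llm() stub for a real chat-completion call.
from collections import deque

def llm(prompt: str) -> str:
    # Placeholder: a real agent would call an LLM API here.
    return "none"

objective = "Research GPU rental pricing and summarize the findings"
tasks = deque(["Draft an initial task list for the objective"])

for _ in range(5):                 # cap iterations; real agents add richer stop conditions
    if not tasks:
        break
    task = tasks.popleft()
    result = llm(f"Objective: {objective}\nTask: {task}\nComplete the task.")
    new_tasks = llm(
        f"Objective: {objective}\nLast result: {result}\n"
        "List any new tasks needed, one per line (or 'none')."
    )
    for line in new_tasks.splitlines():
        if line.strip().lower() not in ("", "none"):
            tasks.append(line.strip())   # the agent keeps working until no tasks remain
```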

“Dynamic” AI Applications

Neural Net

• It would be incomplete to outline dynamic AI applications and not mention Tesla’s involvement. Tesla has been on the cutting edge of AI for years, with the specific focus of using AI to instruct cars to ultimately be fully autonomous. In effect, Tesla created software acting like AI agents for cars.

• To achieve full self-driving, Tesla uses the well-known AI structure called a “neural network” – the foundational architecture underlying modern AI models (including LLMs). Set up much like the human brain, a neural net is composed of layers of interconnected nodes (neurons) that process data to produce output. While GPT’s neural nets are trained on text, Tesla’s neural nets are designed for computer vision tasks, like object detection.

• Tesla’s neural nets, when applied to full self-driving, allow cars to process data from cameras (computer vision) and make decisions about driving – avoiding obstacles, changing lanes, detecting other cars, etc.
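For readers unfamiliar with the “layers of interconnected nodes” structure, below is a minimal illustrative sketch in PyTorch. The layer sizes are arbitrary assumptions; real computer-vision networks (like those used for FSD) are convolutional and vastly larger.

```python
# Sketch only; assumes `pip install torch`.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(64, 128),   # input layer -> hidden layer of 128 neurons
    nn.ReLU(),            # non-linear activation between layers
    nn.Linear(128, 10),   # hidden layer -> 10 output classes
)

x = torch.randn(1, 64)    # one example with 64 input features
logits = net(x)           # forward pass produces one score per class
print(logits.shape)       # torch.Size([1, 10])
```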

AI Landscape: Providers and Infrastructure

Who Are the Players?

The “AI stack” consists of players that maintain, train, and release models (e.g., LLMs), as well as the back-end hardware and infrastructure required for the “compute” that fuels AI applications.

We will examine the compute ecosystem in depth in the next section; here, we examine some of the early leaders in providing AI models. These include OpenAI, the creator of ChatGPT, as well as competing models from Meta, Google, Anthropic, Tesla, and more.

We also provide a brief overview of the software tooling that enables everyday access to these AI models via API connectivity, which unlocks the potential of AI application proliferation.

AI Landscape

OpenAI

• While ChatGPT was the hyperscaling moment for AI applications, there have been decades of foundational work going into the field of artificial intelligence. Many contenders were working on AI and set up as R&D companies – OpenAI was one of them. The ultimate goal of these AI R&D companies is the same – to converge upon a safe, ethical form of artificial general intelligence that ushers a new technological Renaissance. The stakes, however, are high – plenty of dystopian futures also exist where AGI becomes fully autonomous and usurps human moral constructs.

• OpenAI, the parent company of ChatGPT, has several product lines. ChatGPT Enterprise was recently announced to provide more robust and customizable LLMs for companies, along with enterprise-grade security.

• Other OpenAI products include DALL-E (image generation), and a robust API for developers to access and build applications integrating GPT / DALL-E.

AI Landscape

Meta AI - Llama

• While ChatGPT is currently the household name for consumer-grade AI, it is by no means the only game in town. Meta has been a behemoth in the AI space and has released its openly available, downloadable, and customizable LLM called Llama.

• Llama, like GPT, is pretrained on a vast data set and can be fine-tuned and customized in a similar manner.

• Llama also has a code-specialized version called Code Llama, which generates programming code in many languages (Python, JavaScript/TypeScript, Bash, C++) from human prompts. This helps democratize access to coding and can increase engineering productivity across all software organizations.

• Meta’s foray into LLMs extends beyond code and chat applications to image and video generation, as seen in the previous section.

AI Landscape

Google - Bard

• As the AI frenzy gained steam in late 2022 / early 2023, Google was late to the party. However, it is now becoming a prerequisite (rather than just an optional enhancement) for the large technology companies to both integrate and innovate in the AI sector.

• Google’s response to ChatGPT and Llama was its own AI chat application called Bard. Bard is powered by Google’s own LLM called PaLM 2.

• Ironically, Google went from being very early to lagging in the AI space. Google researchers invented the Transformer architecture in 2017 – the same architecture that underpins OpenAI’s GPT models. However, OpenAI productized the technology first with ChatGPT, prompting Google to accelerate work on its own models.

• Microsoft taking a stake in OpenAI forced Google to accelerate its AI push. In the end, increased competition results in better consumer AI products.

AI Landscape

Anthropic - Claude

• While we have covered the behemoth AI providers (OpenAI / Microsoft, Llama / Meta, Bard / Google), there are many other contenders in the AI R&D space that are not part of the FAANG company umbrella.

• One such competitor is Anthropic, another R&D company developing AI systems and building toward the ultimate AGI singularity. Anthropic was founded by former members of OpenAI and remains privately funded.

• Anthropic’s flagship consumer-grade LLM product is called Claude and is an AI chat app similar to ChatGPT, Bard, and Llama. Claude is accessible via API, has a built-in code generator, and has an additional prime directive toward safety and appropriate (harmless) responses. Ethics will inevitably be a major existential debate as AI blossoms, and Anthropic is tackling this now.

AI Landscape

Hugging Face

• We have explored enough permutations of LLMs / AI chat applications to get a cross section of the major players. Now let’s look at different infrastructure tools for AI applications – one of which is Hugging Face.

• Hugging Face embodies the aggregator model by building a hub for all AI players to build, train, and deploy their own custom models. Just like GitHub is a shared public repository for all developers (retail, corporate, enthusiast) to publish and share their code, Hugging Face is like the “AI GitHub” for AI builders to share their models.

• Hugging Face has aggregated an impressive toolbox of over 120,000 models, 30,000 data sets, and 50,000 demo apps (called Spaces). These are all publicly available and can be downloaded and tested. Individuals as well as the major AI titans like Microsoft, Google, and Meta use the tools and repositories available on Hugging Face.

AI Landscape

Tesla / xAI

• We briefly touched on Tesla’s foray into AI development – mostly by fusing AI software with hardware. Whether it is training cars to drive autonomously (Tesla FSD) or building robots that act and perform tasks using AI software, Tesla has been at the cutting edge of AI.

• Elon Musk is famous for doubling down on his bets – and it is no surprise that he announced the creation of xAI, a nebulous but likely cutting-edge new AI-focused operation meant to “understand the true nature of the universe.” While the details are sparse, Musk / Tesla have been acquiring NVIDIA H100s in bulk to set up the necessary “compute factory” to build xAI into an R&D behemoth.

• Ultimately, Musk plans to integrate the AI capabilities developed by xAI into his Twitter/X “super app” – hence weaving social media, fintech, and AI into one grand vision as per his brand.

AI Landscape

LangChain

• Finally, we conclude the AI application infrastructure landscape with a “middleware” tool. One potential issue arising from the proliferation of LLMs across various companies and standards is that connecting to and accessing each independent AI model is difficult.

• This is where a tool like LangChain comes in. AI development, like all development, will likely be accelerated if it is open source where individuals can build customized applications and businesses on top of existing models.

• However, how can developers easily connect to the OpenAI API, the Llama API, and the Anthropic API, and integrate smaller models from Hugging Face? LangChain developed a type of “wrapper aggregator” API that allows for quicker, seamless access to many AI models.

• LangChain supports both Python, the de facto programming language of AI, and JavaScript, the de facto programming language of the Internet. A brief sketch of the wrapper pattern follows.
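The hedged sketch below shows the LangChain “wrapper” pattern roughly as its 2023-era Python API worked: a single prompt template driven by a model back-end behind a common interface, so another provider could be swapped in with minimal changes. The model choice, prompt, and filing text are placeholder assumptions.

```python
# Sketch only; assumes `pip install langchain openai` and an OPENAI_API_KEY in the environment.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template(
    "Summarize the following filing in three bullet points:\n{filing_text}"
)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)   # swap in another provider here
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(filing_text="...example filing text..."))
```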

Compute: The AI Commodity

What is the Economic Model Driving AI?

We’ve explored some of the major providers of AI models and applications – this could be considered the “front-end” of the AI stack. However, beneath this front-end is a vast infrastructure layer – the “back-end” of the AI stack – that drives the high cost of using AI.

Some of these back-end AI costs include:

Training: feeding data to a base AI model (e.g., an LLM) so it is ready for consumer use. As a reference point, estimates for training GPT-3 (one of the models powering ChatGPT) range from $500,000 to $4.6 million.

Inference: querying a pre-trained model (like ChatGPT) with a prompt or inquiry. Inference is much cheaper than training.

External Infrastructure: instead of setting up custom GPU data centers, most AI consumers will use existing infrastructure managed by large cloud service providers (CSPs) and pay the CSP a premium to the raw compute cost.

Hardware: other AI operators will elect to purchase and stand up their own AI hardware. Although the Holy Grail for AI is currently GPUs (specifically, NVIDIA’s H100 GPUs), additional networking infrastructure is also required, such as InfiniBand and NVLink (also provided by NVIDIA), to connect GPUs within a data center.

Power: while the spotlight for AI costs has focused on GPU hardware, power is the underlying commodity driver necessary for data centers to successfully operate. While power costs are currently negligible compared to GPU hardware costs, this may change as AI hardware becomes more efficient (mirroring the Bitcoin mining industry, where power is a significant driver of profitability).

All of these costs can be aggregated to represent computational power, or “compute.” As such, compute is the underlying commodity that powers AI.
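To make the scale of these costs concrete, here is a back-of-the-envelope sketch of how a training bill accumulates. Every figure below is a hypothetical assumption (not a quote from any provider), using the common approximation that a training run requires roughly 6 × parameters × tokens floating-point operations.

```python
# Illustrative arithmetic only; all inputs are assumed values.
params = 7e9            # hypothetical 7B-parameter model
tokens = 1e12           # hypothetical 1T training tokens
gpu_flops = 3e14        # assumed sustained throughput per GPU (FLOP/s)
gpu_hour_price = 2.50   # assumed $/GPU-hour rental rate

train_flops = 6 * params * tokens            # ~6 * N * D approximation for training FLOPs
gpu_hours = train_flops / gpu_flops / 3600   # convert FLOPs to GPU-hours
cost = gpu_hours * gpu_hour_price

print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f} at ${gpu_hour_price}/GPU-hour")
```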

We will explore each of these “back-end” AI costs in more detail and then conclude with the following question:

Can innovative new financial markets products unlock capital and better compute pricing for the AI industry?

Compute: The AI Commodity

Model Training and Inference

Let’s explore training and inference in more detail – as both operations are the lifeblood of AI models and form the drivers of the currently high-cost structure of AI compute.

• Training a model refers to feeding a model data, which it uses to formulate responses (whether text, image, video, or directions sent to an FSD car).

• Inference refers to the act of querying a pre-trained model to get the actual response.

• Per token, training a transformer model (e.g., an LLM) requires roughly three times the compute of inference. And since training data is vastly larger (estimated at roughly 300 million times larger) than inference data (which typically consists of a single prompt), a full training run requires on the order of a billion times more compute than a single inference.

• The table below shows the compute requirements for different LLMs along with the size of the training data (versus the negligible size of inference prompts). The key takeaway is that running a training or inference job on a single consumer-grade GPU would be impossible. This is why an entire industry is being built around compute infrastructure – entire data centers of GPUs, parallelized and interconnected, to handle the massive computational load demanded by the AI industry.
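The rough sketch below reproduces the training-versus-inference gap described above; all inputs are illustrative assumptions. Per token, training costs about 3x inference (forward plus backward pass), and multiplying by the far larger training corpus yields the headline ratio.

```python
# Illustrative arithmetic only; all inputs are assumed values.
per_token_ratio = 3          # training FLOPs per token vs. inference FLOPs per token
training_tokens = 3e11       # hypothetical size of the training corpus (tokens)
prompt_tokens = 1e3          # hypothetical size of a single inference prompt (tokens)

total_ratio = per_token_ratio * (training_tokens / prompt_tokens)
print(f"One training run ~ {total_ratio:.0e}x the compute of one inference call")
# -> on the order of 1e9, i.e. roughly a billion times more compute than one inference
```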

Compute: The AI Commodity

Cloud Service Providers (AWS, Google, Microsoft)

• The natural outcome from having a very constrained commodity (compute) that is in high demand is for incumbent players with existing infrastructure to thrive. Indeed, when considering buying a fleet of top-of-the-line GPUs (which are massively backlogged) vs renting the GPUs from a company with available compute capacity, most AI players choose the latter.

• Although the retail AI boom is new, AI is not; the large players (Cloud Service Providers, or CSPs) such as Amazon, Google, and Microsoft have been investing in hardware infrastructure for years and therefore have been the go-to distributors of compute for AI companies.

• The largest providers with the most capacity are able to set the market price, and therefore players like AWS and Google can rent their GPU power for market premiums. This is where the financial market opportunity lies.

Compute: The AI Commodity

Cloud Service Providers (CoreWeave, Crusoe, Northern Data, Lambda)

• Although much of the compute market infrastructure belongs to the FAANG companies, other players also saw this trend emerging and capitalized early.

• CoreWeave is a GPU operator that started with ETH mining and pivoted presciently into GPU availability for AI and other compute-intensive uses.

• Crusoe began with a novel approach to BTC mining – using flared natural gas (which would otherwise be wasted) to power BTC miners. A forward-thinking company, Crusoe managed the BTC market cycles via sophisticated hedging and diversification and is building CrusoeCloud – a platform to provide compute to the AI and broader compute ecosystem.

• Northern Data provides HPC infrastructure and solutions including a generative AI cloud platform, reimagined Bitcoin mining operations, and next-generation data center infrastructure.

• Lambda Labs is a data center operator with the goal of offering lower pricing and access to GPUs – creating healthy competition with the CSPs.

Compute: The AI Commodity

Hardware Players

• The underlying bottleneck toward the growth of AI compute availability lies in the scarcity of hardware supply, which has catalyzed a hardware acceleration arms race to increase efficiency in compute processes.

• While consumer-grade personal computing can be achieved on CPUs within a PC, we have seen earlier that AI training and inference require more advanced hardware (parallelized GPUs) as well as the associated bandwidth, power, and maintenance setup.

• All the “compute refineries” – CSPs as well as smaller players like CoreWeave and Crusoe – need hardware. That is why the biggest public “winner” in the AI space has been NVIDIA – a company that has had the vision for AI for decades and developed the optimized chips (NVIDIA H100) as well as the connectivity infrastructure (InfiniBand, NVLink) for data centers.

Compute: The AI Commodity

The Role of Power in the Compute Stack

Last, but not least, we arrive at a cost in the compute stack that receives little mainstream attention – though that may change in the future. Currently, the cost of GPU hardware is so high that it dwarfs other cost items for AI compute. We do believe that over time, hardware costs will become more competitive and AI infrastructure will be more ubiquitous, which could result in more efficient markets. As the AI market becomes more efficient, power cost will likely matter more and more.

For a historical example, we look to the Bitcoin mining industry. In the early days of BTC mining, when mining could occur on consumer-grade computers, power was less of an optimization variable. However, as hardware ossified (to ASICs) and profitability became competitive (with the rise of industrial-scale mining pools), power has become one of the most important drivers of BTC mining success.

Could a similar trend occur in AI? And will AI power need to focus on renewables/green energy? We think so – and we think power markets will be a big component of the financial market innovation that will flourish with the growth of AI.

Financial Markets Opportunities from AI

What Financial Products Will AI Unlock?

Finally, we arrive at the opportunity set in financial markets. We have taken a bird’s-eye view of AI applications and looked at the front-end infrastructure players, as well as the players in the compute ecosystem that powers AI.

One major opportunity remains untapped – the development of financial markets. What products and services could arise from financializing the AI economy?

We will brainstorm potential financial structuring and products in this section, including:

• GPU-based financing / lending using GPUs as collateral

• Standardizing compute pricing so it can be a fungible, tradable product

• Establishing secondary markets for compute capacity

• End-to-end trading of compute as a commodity, both in a financial form and a technological form (physically moving workloads)

• Using compute as a commodity to bring edge datacenters and sophisticated financial markets players into the compute ecosystem

• Exploring the potential of “decentralized compute networks” to democratize AI compute and allow the average consumer to act as their own mini-data center

AI’s Financial Opportunities

GPU Based Financing

• GPUs are currently the lifeblood of AI compute, and are also the biggest bottleneck constraining the growth of AI.

• The difference between GPUs and BTC-mining ASICs is that GPUs can be used for many purposes, including AI model training/inference, gaming, graphics rendering, and other applications.

• As a result, GPUs can be a form of valuable collateral – meaning GPUs can be financialized into capital formation products.

• CoreWeave set a standard with a flagship debt raise collateralized by its fleet of GPUs. This collateral model will likely proliferate as GPU operators can tap financial debt markets to unlock working capital.

• Ultimately, as GPU processes become more fungible and streamlined, both big and small operators may be able to cross-collateralize GPUs to create a variety of financial products that could fuel further AI growth.

AI’s Financial Opportunities

Standardized Contract

• For compute to be financialized, it needs to have fungibility. Some fungibility opportunities are apparent; for example, model training does not have to occur all at once. It can be started and stopped, and potentially even ported from one data center to another (subject to data movement).

• Financial markets become more efficient with fungible, accessible products, which allow hedging and capital formation.

• We foresee a world where a commercial unit of compute could be potentially standardized and made fungible across different compute applications.

• In that world, a standard contract (one example brainstormed flow below) could be used to “trade” compute financially among compute users, compute refineries (data centers), and financial intermediaries.
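As a purely illustrative complement to that brainstormed flow, the sketch below shows one way the terms of a standardized, fungible compute contract might be specified as a data structure. Every field and value is a hypothetical assumption for discussion, not a proposed product specification.

```python
# Illustrative data-structure sketch only; all fields and values are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class ComputeForward:
    gpu_class: str             # a standardized hardware tier rather than a specific SKU
    gpu_hours: float           # quantity of compute to be delivered
    delivery_start: date       # delivery window for the workload
    delivery_end: date
    location: str              # data center region (affects power cost and latency)
    price_per_gpu_hour: float  # fixed price agreed today for future delivery

contract = ComputeForward(
    gpu_class="accelerator-tier-A",
    gpu_hours=10_000,
    delivery_start=date(2024, 1, 1),
    delivery_end=date(2024, 3, 31),
    location="us-east",
    price_per_gpu_hour=2.00,
)
print(contract)
```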

AI’s Financial Opportunities

Secondary Market

• Once compute becomes more commoditized and standardized, a financial markets infrastructure could develop that mirrors the existing traditional financial system.

• This means that a secondary market could develop for compute – including a robust derivatives market – which allows CSPs, smaller players, and buyers of AI compute to hedge and forecast costs using financial products.

• Financial hedging of compute also could help secure financing for growth and development from lenders.

• BitOoda has already created financial products and new markets (such as the BitOoda Hash® Contract for the Bitcoin market) and has identified a secondary market for compute as the next potential frontier for the AI industry and financial industry to converge.

AI’s Financial Opportunities

Secondary Compute Players

• While we explored many front-facing AI applications in the earlier sections of this report, it is notable that the current AI boom is catalyzing innovation across the AI stack – including in the hardware and GPU availability space.

• One example of a startup tackling GPU capacity is Shadeform – a Y Combinator-backed startup with an accessible API and unified platform to connect compute buyers to available GPUs across a network.

• There will likely be ongoing iterations of this business model as the CSP model becomes more open source and GPU access becomes increasingly democratized over time.

• Another trend could be the “decentralization” of compute – a vision we will explore in our concluding example of financial market opportunities arising from the AI Renaissance.

AI’s Financial Opportunities

Decentralized Compute Possibilities

• “Decentralized compute,” which sounds like a forced marriage of the crypto and AI industries, is actually a very efficient endgame for the future of AI (and it does not actually require crypto in many cases).

• Right now, compute is fairly centralized, with supply coming from major players like CSPs.

• Meanwhile, consumer-grade hardware is everywhere. iPhones and laptops are ubiquitous, and many devices have spare capacity (they are not running at full steam at all times).

• The vision for decentralized compute is to create an “edge” market where any user’s device can contribute compute to a network – and be paid for it.

• On the crypto front, startups like Gensyn and Akash are creating incentivized compute networks.

Compute Infrastructure

BitOoda’s Role in the AI and Compute Ecosystems

As compute transforms into a financialized commodity, BitOoda has developed a flywheel of products and services designed to help clients gain access to the broad spectrum of resources and market solutions across the compute ecosystem.

We welcome collaboration on all the financial markets opportunities described in this report; please reach out with interest!

Conclusion

The Great Convergence

Since its inception, BitOoda has held a core thesis revolving around the concept of convergence: that the rise of new technology verticals – from AI to blockchain to zero knowledge proofs – would all lead to an unquenchable demand for compute.

Compute would therefore have to transform from a resource maintained by a few major cloud service providers and data centers into a commodity, which could be made portable and could trade across time and geographic locations.

Financialization has unlocked capital in the US ecosystem, with a robust equity and debt market helping companies grow by giving access to capital and unlocking buyers of that capital.

The financialization of compute is an inevitable step to increase capital access and efficiency in the compute markets and ultimately result in a more efficient allocation of compute as a commodity.

The rapid rise of AI is accelerating this convergence of compute markets with power markets and financial markets.

In the end, with financialization catalyzing a race to maturing compute markets and lower prices, the customer will be the ultimate winner.

Disclosures

Purpose

This research is only for the clients of BitOoda. This research is not intended to constitute an offer, solicitation, or invitation for any securities and may not be distributed into jurisdictions where it is unlawful to do so. For additional disclosures and information, please contact a BitOoda representative at info@bitooda.io.

Analyst Certification

Vivek Raman, the primary author of this report, hereby certifies that all of the views expressed in this report accurately reflect his personal views, which have not been influenced by considerations of the firm’s business or client relationships.

Conflicts of Interest

This research contains the views, opinions, and recommendations of BitOoda. This report is intended for research and educational purposes only. We are not compensated in any way based upon any specific view or recommendation.

General Disclosures

Any information (“Information”) provided by BitOoda Holdings, Inc., BitOoda Digital, LLC, BitOoda Technologies, LLC or Ooda Commodities, LLC and its affiliated or related companies (collectively, “BitOoda”), either in this publication or document, in any other communication, or on or through http://www.bitooda.io/, including any information regarding proposed transactions or trading strategies, is for informational purposes only and is provided without charge. BitOoda is not and does not act as a fiduciary or adviser, or in any similar capacity, in providing the Information, and the Information may not be relied upon as investment, financial, legal, tax, regulatory, or any other type of advice. The Information is being distributed as part of BitOoda’s sales and marketing efforts as an introducing broker and is incidental to its business as such. BitOoda seeks to earn execution fees when its clients execute transactions using its brokerage services. BitOoda makes no representations or warranties (express or implied) regarding, nor shall it have any responsibility or liability for the accuracy, adequacy, timeliness or completeness of, the Information, and no representation is made or is to be implied that the Information will remain unchanged. BitOoda undertakes no duty to amend, correct, update, or otherwise supplement the Information.

The Information has not been prepared or tailored to address, and may not be suitable or appropriate for the particular financial needs, circumstances or requirements of any person, and it should not be the basis for making any investment or transaction decision. The Information is not a recommendation to engage in any transaction. The digital asset industry is subject to a range of inherent risks, including but not limited to: price volatility, limited liquidity, limited and incomplete information regarding certain instruments, products, or digital assets, and a still emerging and evolving regulatory environment. The past performance of any instruments, products or digital assets addressed in the Information is not a guide to future performance, nor is it a reliable indicator of future results or performance.

Ooda Commodities, LLC is a member of NFA and is subject to NFA’s regulatory oversight and examinations. However, you should be aware that NFA does not have regulatory oversight authority over underlying or spot virtual currency products or transactions or virtual currency exchanges, custodians or markets.

BitOoda Technologies, LLC is a member of FINRA.

“BitOoda”, “BitOoda Difficulty”, “BitOoda Hash”, “BitOoda Compute”, and the BitOoda logo are trademarks of BitOoda Holdings, Inc.

Copyright 2023 BitOoda Holdings, Inc. All rights reserved. No part of this material may be reprinted, redistributed, or sold without prior written consent of BitOoda.

On November 30, 2022, OpenAI boldly launched ChatGPT – a user-friendly, interactive, front-end application for its flagship AI LLM product – sparking a sudden and monumental paradigm shift that opened the floodgates for AI proliferation.

Although artificial intelligence is by no means a new area of research and innovation, the Cambrian explosion for mass adoption was catalyzed by the launch of ChatGPT. Indeed, ChatGPT took the somewhat abstract and nebulous concept of AI and repackaged it into a universal consumer-grade application. Its success was unparalleled: ChatGPT took a mere two months to reach the 100 million user mark. For context, it took the wildly popular TikTok app nine months and Instagram 2.5 years respectively to reach the 100 million user mark. It is not an exaggeration to say that ChatGPT was the “Google” or ”iPhone” moment for unlocking AI for consumer-grade and institutional-grade applications.

What are some of the novel applications that the proliferation of LLMs (large language models) create? We separate these into “static” and “dynamic” AI applications. ”Static” AI applications include chatbots and image generation; “dynamic” applications include AI agents.

After exploring the design space for AI applications, we turn to the major infrastructure players of the emerging AI industry. These include the current incumbents in the LLM space, such as OpenAI and Meta’s LLaMA. We also look at the ”middleware” layer, such as Langchain, which offers a toolset for anyone to customize and train their own AI applications, sparking a new set of innovation potential at the startup app level and democratizing access to the LLMs via API integration.

While the application layer will be the most visible and prominent result of the AI boom, the infrastructure layer is arguably just as important. Digging deeper into the infrastructure part of the “AI stack,” we arrive at the “digital commodity” that fuels the AI industry – which we know as “compute.” (For an overview of compute, see our 5/12/23 report titled “High Performance Compute: A Primer” and our 6/2/23 report titled “The Compute Stack.”)

Although AI applications seem magical on the surface and will spark disruption across all industries, running AI applications is not free. The compute cost is currently the constraining factor for AI growth and is catalyzing a modern-day Gold Rush for compute providers to accumulate GPU infrastructure.

NVIDIA is currently the kingmaker of the GPU industry and provides the most cutting-edge, AI-configured GPUs (the H100 and A100 models). This has resulted in a massive supply-demand imbalance, where the demand for computing resources is seemingly infinite, while the supply of compute hardware is constrained.

We believe this supply-demand imbalance is a product of the early part of the AI boom, and as AI applications, software, and infrastructure mature, there will be novel ways of democratizing access to compute. One way to do so is to introduce financial markets for the AI compute ecosystem.

Ultimately, we conclude that the financialization of compute as the commodity powering AI is inevitable, and we explore financial markets opportunities arising from the AI revolution. These opportunities range from financing and lending facilities using GPU collateral, to standardizing compute contracts, to establishing a secondary market for compute, and to expanding existing compute infrastructure into a decentralized, democratized system that is accessible to the broader community.

“Static” AI Applications

Initial Applications from the AI Revolution

The first foray of widespread AI applications is based on LLMs (large language models) – this is due to the resounding success and immediate product-market fit of ChatGPT.

Large language models are one of the many subsets of the artificial intelligence realm in which models are “trained on,” or fed, vast amounts of text data to learn patterns of natural language and therefore generate robust, human-like text.

While the previous version of chat apps, known as chatbots, were much simpler – programmed to respond to specific inputs and output pre-set responses – LLMs like ChatGPT use machine learning to generate responses. Therefore, rather than being programmed to output specific responses, ChatGPT “learns” based on the text data it is trained on and therefore generates more complex, varied responses.

Nevertheless, LLMs are the tip of the iceberg of AI’s potential, as LLMs are trained from “static” data sets to produce ”static” AI applications, which then need to be re-trained or fed more data to continue ”learning.”

In this section, we will explore other ”static” AI applications, which include:

• LLMs for chatting, e.g., ChatGPT

• LLMs for Image generation, e.g., DALL-E

• Video / music generation, e.g., Replicate

“Static” AI Applications

Chat Platform

• ChatGPT represents the “iPhone” or “Google” moment for AI. Although AI research has been ongoing for decades, the true consumer product-market fit was resoundingly achieved when OpenAI released ChatGPT.

• After ChatGPT, other AI-powered chat platforms joined the fray, including Google’s Bard and Anthropic’s Claude.

• Chat platforms, powered by LLMs, change the way humans can interact with knowledge. Instead of manually aggregating data and information from a variety of sources (e.g., the “Google search method”), users can simply ask an AI chat platform a question. The platform, using a trained LLM, will aggregate data from its training and emit a human-readable response.

• While chat platforms are the foundational building block for several AI applications, they do represent a better “Google” in their static form and have plenty of potential to grow their functionality and offering.

“Static” AI Applications

Image Generation

• The power of AI extends beyond text to image generation – unlocking a whole new dimension of content generation. Tools like DALL-E, developed by OpenAI, can take a short text prompt and output a series of images that depict the text (see example below).

• How does image generation AI work under the hood? Instead of using OpenAI’s GPT (used for chat AI by ChatGPT), DALL-E uses an OpenAI model called CLIP – or Contrastive Language-Image Pre-training.

• CLIP is trained with hundreds of millions of images along with their captions so that it can learn how text and images correspond. This training allows the CLIP model to learn how related a caption is to an image.

• DALL-E then uses a diffusion model to generate an image based on CLIP text embeddings, ultimately linking the text to images and then generating the photorealistic image in a seemingly magical feat.

“Static” AI Applications

Video / Music Generation

• This combination of image training (CLIP) and image generation (diffusion) can be extended from static pictures to AI-generated videos and even AI-generated music.

• The music-gen model from Meta’s AI research lab can take a text input (like DALL-E or ChatGPT) and output a music clip representing the text.

• The implications of these static AI applications – from chat applications to AI image generation to AI music / video generation – are vast. The companies that could be displaced span across industries and across employee levels.

• Any sort of low-level content writing, or research and aggregation, could be swiftly replaced by chat apps like ChatGPT. Art could be massively disrupted by tools like DALL-E, and the entire entertainment sector could be reshaped using AI-generated content (AI scripts, AI videos, AI voice / music).

“Dynamic” AI Applications

How Can AI Applications Expand Beyond “Static” AI?

The potential of “static” AI applications alone is vast and can reshape many industries, from writing to art and film to the “search engine” model made ubiquitous by Google. Much of day-to-day communication, sales, document analysis, and legal work could potentially be automated via static AI applications.

However, this is merely the tip of the iceberg. What if, instead of using just historical data sets to train AI models, AI could interact with everyday real-time data and actually perform tasks? Then, instead of being a souped-up version of Google’s “I’m Feeling Lucky” button, what if AI could dynamically analyze and execute instructions?

"Dynamic” AI applications, which make technology start to look like science fiction, include the following:

• Fine tuning (customizing LLM models)

• Plug-ins (interacting with live data)

• Code interpreter (an AI analyst)

• Function calling (programming AI to execute tasks)

• AI Agents (e.g., BabyAGI or semi-autonomous AI)

• Neural Networks (neural nets) for ”computer vision” (e.g., Tesla Full Self-Driving)

“Dynamic” AI Applications

Fine-Tuning

• Most people who have used ChatGPT notice that there is a knowledge cutoff date of September 2021. This means that the “knowledge” – i.e., the text and data – fed to train the GPT model is only current through 2021. While this is plenty of data for historical inquiries, it does create a limitation where ChatGPT cannot interact with more current or real-time data.

• If ChatGPT, DALL-E, and video/music generation were completely static and could only respond with data before 2021, the innovation would still be monumental. However, models like GPT are customizable via a process called “fine-tuning.” Fine-tuning allows for users to retrain, or feed additional data and information, to the existing models to achieve more customized, more current, and more efficient outputs.

• Fine-tuning makes static AI models dynamic and unlocks vast AI potential.

“Dynamic” AI Applications

Plugins

• ChatGPT plugins are an example of fine-tuning ChatGPT to integrate real-time dynamic functionality. Plugins were the first major dynamic AI application to flourish right after the release of ChatGPT.

• Plugins are third-party extensions that build upon ChatGPT to extend functionality – via accessing realtime information, running computations on external data (not just the data the ChatGPT is trained on), or interacting with third-party services like websites.

• Plugins start to unlock the potential of dynamic AI applications. Some examples of plugins include: a plugin on the Kayak or Expedia website that answers real-time travel questions (vs. ChatGPT that would be limited to a knowledge base from 2021). A “summarizing” plugin can take content from any website and create a bullet point summary. Other examples of plugins, such as “AskYourPDF,” in the ChatGPT plugins store are shown below.

“Dynamic” AI Applications

Code Interpreter

• OpenAI Code Interpreter is one of the most consequential plugins that enables ChatGPT to do two things: (1) run code in a conversational manner, which is important for all types of data analytics, and (2) access files uploaded to Code Interpreter.

• This is a very powerful combination. In the traditional workflow, an example task would be for a human to take an Excel sheet full of data and then format it, analyze it, and produce outputs (graphs, tables).

• Code Interpreter eliminates the human element. Instead, a human can upload the Excel sheet, give instructions to ChatGPT to form analyses and conclusions about the data, and ask ChatGPT to output the relevant graphs and tables.

• Code Interpreter is a game changer – it could automate the creation of PowerPoints, Excel sheets, and ultimately replace the “analyst” role.

“Dynamic” AI Applications

Function Calling

• OpenAI Function Calling is where the true power of dynamic AI applications begins to shine. The current iteration of ChatGPT’s static AI is, in essence, a souped-up version of Google. It produces high quality text outputs but still requires humans to provide prompts and act on information. What if, in addition to providing a text output, ChatGPT could also execute functions and perform tasks on behalf of users?

• This is what Function Calling achieves. For example, instead of ChatGPT simply composing the text as an email response, Function Calling allows ChatGPT to both compose the response and send the email without human interaction. Functions bring programmability to ChatGPT and allow it to interact with the outside world. Ultimately, function calling could lead to ChatGPT making Amazon purchases or booking flights on behalf of users – it is the first step to true autonomous AI utility.

“Dynamic” AI Applications

AI Agents

• The ultimate endgame of AI is the development of AGI - ”artificial general intelligence” – whereby AI can dynamically learn and execute on its own without human intervention. As we can see from many of the dynamic AI tasks described in this section, introducing plugins, code interpreter, and function calling are steps toward achieving autonomous AGI.

• One intermediate step toward AGI is the concept of an “AI Agent” – which is an AI-powered software program that interacts with its environment to achieve specific goals without continual human interaction. AI Agents provide a natural evolution to AGI as they act increasingly autonomously.

• “BabyAGI,” one implementation of an AI agent, is an example of an intermediate step. By recursively using function calls and plugins to interact with its environment, BabyAGI can act as an AI agent and complete series of tasks semi-autonomously, dynamically reacting based on the results.

“Dynamic” AI Applications

Neural Net

• It would be incomplete to outline dynamic AI applications and not mention Tesla’s involvement. Tesla has been on the cutting edge of AI for years, with the specific focus of using AI to instruct cars to ultimately be fully autonomous. In effect, Tesla created software acting like AI agents for cars.

• To achieve full self driving, Tesla uses the well-known AI structure called a “neural network,” which is a foundational superset of all AI models (including LLMs). Set up much like the human brain, a neural net is composed of layers of interconnected nodes (neurons) that process data to produce output. While GPT’s neural nets are trained using text, Tesla’s neural nets are designed for computer vision tasks, like object detection.

• Tesla’s neural nets, when applied to the full self driving cars, allow cars to process data from cameras (computer vision) and make decisions about driving – avoiding obstacles, changing lanes, detecting other cars, etc.

AI Landscape: Providers and Infrastructure

Who Are the Players?

The “AI stack” consists of players that maintain, train, and release models (e.g., LLMs), as well as the back-end hardware and infrastructure required for the “compute” that fuels AI applications.

We will examine the compute ecosystem in depth in the next section; in this section, we examine some of the early leaders in providing AI models. These include OpenAI, the parent of ChatGPT, as well as competing models from Facebook, Google, Anthropic, Tesla, and more.

We also provide a brief overview of the software tooling that enables everyday access to these AI models via API connectivity, which unlocks the potential of AI application proliferation.

AI Landscape

OpenAI

• While ChatGPT was the hyperscaling moment for AI applications, there have been decades of foundational work going into the field of artificial intelligence. Many contenders were working on AI and set up as R&D companies – OpenAI was one of them. The ultimate goal of these AI R&D companies is the same – to converge upon a safe, ethical form of artificial general intelligence that ushers a new technological Renaissance. The stakes, however, are high – plenty of dystopian futures also exist where AGI becomes fully autonomous and usurps human moral constructs.

• OpenAI, the parent company of ChatGPT, has several product lines. ChatGPT Enterprise was recently announced to provide more robust and customizable LLMs for companies, along with enterprise-grade security.

• Other OpenAI products include DALL-E (image generation), and a robust API for developers to access and build applications integrating GPT / DALL-E.

AI Landscape

Meta AI - Llama

• While ChatGPT is currently the household name for consumer-grade AI, it is by no means the only game in town. Meta has been a behemoth in the AI space and has released its fully open-source, downloadable, and customizable LLM called Llama.

• Llama, like GPT, is pretrained on a vast data set and can be fine-tuned and customized in a similar manner.

• Llama also has a code-specialized version called Code Llama, which generates programming code in many languages (Python, JavaScript/TypeScript, Bash, C++) from human prompts. This helps democratize access to coding and can increase engineering productivity across software organizations.

• Meta’s foray into LLMs extends beyond code and chat applications to image and video generation, as seen in the previous section.

AI Landscape

Google - Bard

• As the AI frenzy gained steam in late 2022 / early 2023, Google was late to the party. However, integrating and innovating in the AI sector is now becoming a prerequisite (rather than just an optional enhancement) for the large technology companies.

• Google’s response to ChatGPT and Llama was its own AI chat application called Bard. Bard is powered by Google’s own LLM called PaLM 2.

• Ironically, Google went from being very early to lagging in the AI space. Google researchers invented the Transformer architecture in 2017 – the foundation on which OpenAI built GPT (the “T” in GPT stands for Transformer). However, ChatGPT’s consumer success leapfrogged Google’s own offerings, prompting Google to productize its own models.

• Microsoft’s stake in OpenAI forced Google to accelerate its AI push. In the end, increased competition should result in better consumer AI products.

AI Landscape

Anthropic - Claude

• While we have covered the behemoth AI providers (OpenAI / Microsoft, Llama / Meta, Bard / Google), there are many other contenders in the AI R&D space that are not part of the FAANG company umbrella.

• One such competitor is Anthropic, another R&D company developing AI systems and building toward the ultimate AGI singularity. Anthropic was founded by former members of OpenAI and remains privately funded.

• Anthropic’s flagship consumer-grade LLM product is called Claude – an AI chat app similar to ChatGPT and Bard. Claude is accessible via API, can generate code, and operates with an explicit prime directive toward safety and harmless responses. Ethics will inevitably be a major existential debate as AI blossoms, and Anthropic is tackling it now.

AI Landscape

Hugging Face

• We have explored enough permutations of LLMs / AI chat applications to get a cross section of the major players. Now let’s look at different infrastructure tools for AI applications – one of which is Hugging Face.

• Hugging Face embodies the aggregator model by building a hub for all AI players to build, train, and deploy their own custom models. Just as GitHub is a shared public repository where all developers (retail, corporate, enthusiast) publish and share their code, Hugging Face is the “AI GitHub” where AI builders share their models.

• Hugging Face has aggregated an impressive toolbox of over 120,000 models, 30,000 data sets, and 50,000 demo apps (called Spaces). These are all publicly available and can be downloaded and tested. Individuals as well as the major AI titans like Microsoft, Google, and Meta use the tools and repositories available on Hugging Face.
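As an illustration of how a publicly hosted Hub model can be pulled down and run locally, the sketch below uses the transformers library’s pipeline API with “distilgpt2,” a small, freely available example model chosen purely for illustration; any public Hub model ID could be substituted, and exact arguments may vary by library version. A backend such as PyTorch is assumed to be installed.

```python
# Sketch of pulling a publicly hosted model from the Hugging Face Hub and running
# local inference with the transformers library. "distilgpt2" is just a small,
# freely available example model; any Hub model ID could be substituted.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Compute is the commodity that fuels AI because", max_new_tokens=30)
print(result[0]["generated_text"])
```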

AI Landscape

Tesla / xAI

• We briefly touched on Tesla’s foray into AI development – mostly by fusing AI software with hardware. Whether it is training cars to drive autonomously (Tesla FSD) or building robots that act and perform tasks using AI software, Tesla has been at the cutting edge of AI.

• Elon Musk is famous for doubling down on his bets – and it is no surprise that he announced the creation of xAI, a nebulous but likely cutting-edge new AI-focused operation meant to “understand the true nature of the universe.” While the details are sparse, Musk / Tesla have been acquiring NVIDIA H100s in bulk to set up the necessary “compute factory” to build xAI into an R&D behemoth.

• Ultimately, Musk plans to integrate the AI capabilities developed by xAI into his Twitter/X “super app” – hence weaving social media, fintech, and AI into one grand vision as per his brand.

AI Landscape

LangChain

• Finally, we conclude the AI application infrastructure landscape with a “middleware” tool. One issue arising from the proliferation of LLMs across various companies and standards is that connecting to and accessing each independent AI model is difficult.

• This is where a tool like LangChain comes in. AI development, like all software development, is likely to accelerate when it is open, letting individuals build customized applications and businesses on top of existing models.

• However, how can developers easily connect to the OpenAI API, the Llama API, and the Anthropic API, and integrate smaller models from Hugging Face? LangChain developed a type of “wrapper aggregator” API that allows quicker, more seamless access to many AI models (a minimal sketch follows this list).

• LangChain supports both Python, the de facto programming language of AI, and Javascript, the de facto programming language of the Internet.
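The sketch below illustrates the aggregator idea: different providers sit behind a similar interface, so the underlying LLM can be swapped with minimal code changes. Class and method names reflect the 2023-era langchain package layout and may differ in newer releases; API keys are assumed to be set via environment variables.

```python
# Hedged sketch of the "wrapper aggregator" idea: LangChain exposes different
# providers behind a similar interface, so an application can swap the underlying
# LLM with minimal code changes. Names reflect 2023-era langchain releases.
from langchain.chat_models import ChatOpenAI, ChatAnthropic
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")

# Same chain, two different providers -- only the llm object changes.
for llm in (ChatOpenAI(model="gpt-3.5-turbo"), ChatAnthropic()):
    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.run(text="Compute is the underlying commodity that powers AI."))
```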

Compute: The AI Commodity

What is the Economic Model Driving AI?

We’ve explored some of the major providers of AI models and applications – this could be considered the “front-end” of the AI stack. However, beneath this front-end is a vast infrastructure layer comprising the “back-end” of the AI stack, which drives the high cost of using AI.

Some of these back-end AI costs include:

Training: feeding data to a base AI model (e.g., an LLM) so it is ready for consumer use. As a reference point, estimates for training GPT-3 (one of the models powering ChatGPT) range from $500,000 to $4.6 million.

Inference: querying a pre-trained model (like ChatGPT) with a prompt or inquiry. Inference is much cheaper than training.

External Infrastructure: instead of setting up custom GPU data centers, most AI consumers will use existing infrastructure managed by large cloud service providers (CSPs) and pay the CSP a premium over the raw compute cost.

Hardware: other AI operators will elect to purchase and stand up their own AI hardware setup. Although the Holy Grail for AI is currently GPUs (specifically, NVIDIA’s H100 GPUs), there is also tertiary hardware infrastructure required, such as InfiniBand and NVLink (also provided by NVIDIA) to connect GPUs within a data center.

Power: while the spotlight for AI costs has focused on GPU hardware, power is the underlying commodity driver necessary for data centers to successfully operate. While power costs are currently negligible compared to GPU hardware costs, this may change as AI hardware becomes more efficient (mirroring the Bitcoin mining industry, where power is a significant driver of profitability).

All of these costs can be aggregated to represent computational power, or “compute.” As such, compute is the underlying commodity that powers AI.

We will explore each of these “back-end” AI costs in more detail and then conclude with the following question:

Can innovative new financial markets products unlock capital and better compute pricing for the AI industry?

Compute: The AI Commodity

Model Training and Inference

Let’s explore training and inference in more detail – both operations are the lifeblood of AI models and drive the currently high cost structure of AI compute.

• Training a model refers to feeding a model data, which it uses to formulate responses (whether text, images, video, or directions sent to an FSD car).

• Inference refers to the act of querying a pre-trained model to get the actual response.

• Per token, training a transformer model (e.g., an LLM) requires roughly three times the compute of inference. However, since a training corpus is vastly larger (estimated at roughly 300 million times larger) than a typical inference prompt, a full training run consumes on the order of a billion times more compute than a single inference (a back-of-the-envelope sketch follows this list).

• The table below shows the compute requirements for different LLMs along with the size of their training data (versus the negligible size of inference prompts). The key takeaway is that a training job – or large-scale inference – would be impossible on a single consumer-grade GPU. This is why an entire industry is being built around compute infrastructure: entire data centers of parallelized, interconnected GPUs that handle the massive computational load demanded by the AI industry.
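To make the training-versus-inference gap concrete, the back-of-the-envelope sketch below uses the common rule of thumb that inference costs roughly 2 FLOPs per parameter per token while a training step (forward plus backward pass) costs roughly 6 – i.e., about 3x per token. The GPT-3-scale figures are illustrative published estimates, not vendor-confirmed numbers.

```python
# Back-of-the-envelope arithmetic behind the bullets above. Rule of thumb:
# inference ~2 FLOPs per parameter per token; training ~6 (forward + backward).
params = 175e9             # GPT-3-class model, ~175B parameters (estimate)
train_tokens = 300e9       # ~300B training tokens (published estimate)
prompt_tokens = 1e3        # a generous ~1,000-token inference prompt (assumption)

train_flops = 6 * params * train_tokens    # ~3.15e23 FLOPs for the full run
infer_flops = 2 * params * prompt_tokens   # ~3.5e14 FLOPs for one prompt

print(f"training FLOPs:  {train_flops:.2e}")
print(f"inference FLOPs: {infer_flops:.2e}")
print(f"ratio: ~{train_flops / infer_flops:.1e}x")  # roughly a billion-fold gap
```

The 3x per-token factor times the ~300-million-fold difference in data volume is what produces the roughly billion-fold gap between a full training run and a single inference query.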

Compute: The AI Commodity

Cloud Service Providers (AWS, Google, Microsoft)

• The natural outcome of having a very constrained commodity (compute) in high demand is that incumbent players with existing infrastructure thrive. Indeed, when weighing buying a fleet of top-of-the-line GPUs (which are massively backlogged) against renting GPUs from a company with available compute capacity, most AI players choose the latter.

• Although the retail AI boom is new, AI is not; the large players (Cloud Service Providers, or CSPs) such as Amazon, Google, and Microsoft have been investing in hardware infrastructure for years and therefore have been the go-to distributors of compute for AI companies.

• The largest providers with the most capacity are able to set the market price, and therefore players like AWS and Google can rent their GPU power for market premiums. This is where the financial market opportunity lies.

Compute: The AI Commodity

Cloud Service Providers (CoreWeave, Crusoe, Northern Data, Lambda)

• Although much of the compute market infrastructure belongs to the FAANG companies, other players also saw this trend emerging and capitalized early.

• CoreWeave is a GPU operator that started with ETH mining and pivoted presciently into GPU availability for AI and other compute-intensive uses.

• Crusoe began with a novel approach to BTC mining – using flared natural gas (which would otherwise be wasted) to power BTC miners. A forward-thinking company, Crusoe managed the BTC market cycles via sophisticated hedging and diversification and is building CrusoeCloud – a platform to provide compute to the AI and broader compute ecosystem.

• Northern Data provides HPC infrastructure and solutions including a generative AI cloud platform, reimagined Bitcoin mining operations, and next-generation data center infrastructure.

• Lambda Labs is a data center operator with the goal of offering lower pricing and access to GPUs – creating healthy competition with the CSPs.

Compute: The AI Commodity

Hardware Players

• The underlying bottleneck toward the growth of AI compute availability lies in the scarcity of hardware supply, which has catalyzed a hardware acceleration arms race to increase efficiency in compute processes.

• While consumer-grade personal computing can be achieved on CPUs within a PC, we have seen earlier that AI training and inference require more advanced hardware (parallelized GPUs) as well as the associated bandwidth, power, and maintenance setup.

• All the “compute refineries” – CSPs as well as smaller players like CoreWeave and Crusoe – need hardware. That is why the biggest public “winner” in the AI space has been NVIDIA – a company that had the vision for AI for decades and developed both the optimized chips (the NVIDIA H100) and the connectivity infrastructure (InfiniBand, NVLink) for data centers.

Compute: The AI Commodity

The Role of Power in the Compute Stack

Last, but not least, we arrive at a cost in the compute stack that receives little mainstream attention today – though that may change in the future. Currently, the cost of GPU hardware is so high that it dwarfs other cost items for AI compute. We do believe that over time, hardware costs will become more competitive and AI infrastructure more ubiquitous, which could result in more efficient markets. As the AI market becomes more efficient, power costs will likely matter more and more.
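To illustrate why power is currently a second-order cost, consider the rough sketch below. All figures are our own illustrative assumptions: a ~700 W draw for a data-center GPU, a ~$30,000 purchase price amortized straight-line over four years, and industrial power at $0.05–$0.10 per kWh.

```python
# Rough, illustrative comparison of annual power spend versus amortized hardware
# spend for a single data-center GPU. All inputs are assumptions for the sketch.
watts = 700
hours_per_year = 24 * 365
kwh_per_year = watts / 1000 * hours_per_year        # ~6,132 kWh per GPU per year

hardware_cost_per_year = 30_000 / 4                 # $30k amortized over 4 years
for price_per_kwh in (0.05, 0.10):
    power_cost = kwh_per_year * price_per_kwh
    share = power_cost / (power_cost + hardware_cost_per_year)
    print(f"${price_per_kwh:.2f}/kWh -> power ~${power_cost:,.0f}/yr, "
          f"~{share:.0%} of combined annual cost")
```

Under these assumptions, power is only a mid-single-digit percentage of the combined annual cost per GPU – but if hardware prices normalize while fleets keep running around the clock, that share rises, which is the dynamic we describe next.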

For a historical example, we look to the Bitcoin mining industry. In the early days of BTC mining, when mining could occur on consumer-grade computers, power was less of an optimization variable. However, as hardware ossified into ASICs and profitability became competitive with the rise of industrial-scale mining operations, power became one of the most important drivers of BTC mining success.

Could a similar trend occur in AI? And will AI power need to focus on renewables/green energy? We think so – and we think power markets will be a big component of the financial market innovation that will flourish with the growth of AI.

Financial Markets Opportunities from AI

What Financial Products Will AI Unlock?

Finally, we arrive at the opportunity set in financial markets. We have taken a bird’s-eye view of AI applications and looked at the front-end infrastructure players, as well as the players in the compute ecosystem that powers AI.

One major opportunity remains untapped – the development of financial markets. What products and services could arise from financializing the AI economy?

We will brainstorm potential financial structuring and products in this section, including:

• GPU-based financing / lending using GPUs as collateral

• Standardizing compute pricing so it can be a fungible, tradable product

• Establishing secondary markets for compute capacity

• End-to-end trading of compute as a commodity, both in a financial form and a technological form (physically moving workloads)

• Using compute as a commodity to bring edge datacenters and sophisticated financial markets players into the compute ecosystem

• Exploring the potential of “decentralized compute networks” to democratize AI compute and allow the average consumer to act as their own mini-data center

AI’s Financial Opportunities

GPU Based Financing

• GPUs are currently the lifeblood of AI compute, and are also the biggest bottleneck constraining the growth of AI.

• The difference between GPUs and BTC-mining ASICs is that GPUs can be used for many purposes, including AI model training/inference, gaming, graphics rendering, and other applications.

• As a result, GPUs can be a form of valuable collateral – meaning GPUs can be financialized into capital formation products.

• CoreWeave set a standard with a flagship debt raise collateralized by its fleet of GPUs. This collateral model will likely proliferate as GPU operators can tap financial debt markets to unlock working capital.

• Ultimately, as GPU processes become more fungible and streamlined, both big and small operators may be able to cross-collateralize GPUs to create a variety of financial products that could fuel further AI growth.

AI’s Financial Opportunities

Standardized Contract

• For compute to be financialized, it needs to have fungibility. Some fungibility opportunities are apparent; for example, model training does not have to occur all at once. It can be started and stopped, and potentially even ported from one data center to another (subject to data movement).

• Financial markets become more efficient with fungible, accessible products, which allow hedging and capital formation.

• We foresee a world where a commercial unit of compute could potentially be standardized and made fungible across different compute applications.

• In that world, a standard contract (a hypothetical sketch follows) could be used to “trade” compute financially among compute users, compute refineries (data centers), and financial intermediaries.
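As a purely hypothetical illustration, the sketch below shows the kinds of terms such a standardized contract might specify. The field names, units, and prices are our own assumptions for brainstorming – not an existing BitOoda, exchange, or industry specification.

```python
# Purely hypothetical sketch of what a standardized, fungible compute contract
# might specify -- field names and units are our own illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ComputeContract:
    gpu_class: str              # hardware tier being delivered, e.g. "H100-80GB"
    gpu_hours: float            # quantity: hours of one GPU (or GPU-equivalents)
    delivery_start: date        # start of the delivery window
    delivery_end: date          # end of the delivery window
    region: str                 # delivery location / data-center region
    price_per_gpu_hour: float   # agreed price in USD

    def notional(self) -> float:
        """Total dollar value of the contract."""
        return self.gpu_hours * self.price_per_gpu_hour

# Example: 10,000 H100-hours delivered over Q1 at $2.50/hour -> $25,000 notional.
lot = ComputeContract("H100-80GB", 10_000, date(2024, 1, 1), date(2024, 3, 31),
                      "us-east", 2.50)
print(f"Notional value: ${lot.notional():,.0f}")
```

Standardizing terms like these – hardware tier, quantity, delivery window, and location – is what would allow compute to be quoted, hedged, and transferred between counterparties the way other commodities are today.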

AI’s Financial Opportunities

Secondary Market

• Once compute becomes more commoditized and standardized, a financial markets infrastructure could develop that mirrors the existing traditional financial system.

• This means that a secondary market could develop for compute – including a robust derivatives market – which allows CSPs, smaller players, and buyers of AI compute to hedge and forecast costs using financial products.

• Financial hedging of compute also could help secure financing for growth and development from lenders.

• BitOoda has already created financial products and new markets (such as the BitOoda Hash® Contract for the Bitcoin market) and has identified a secondary market for compute as the next potential frontier for the AI industry and financial industry to converge.

AI’s Financial Opportunities

Secondary Compute Players

• While we explored many front-facing AI applications in the earlier sections of this report, it is notable that the current AI boom is catalyzing innovation across the AI stack – including in the hardware and GPU availability space.

• One example of a startup tackling GPU capacity is Shadeform – a Y Combinator-backed startup with an accessible API and a unified platform connecting compute buyers to available GPUs across a network.

• There will likely be ongoing iterations of this business model as the CSP model becomes more open and GPU access becomes increasingly democratized over time.

• Another trend could be the “decentralization” of compute – a vision we will explore in our concluding example of financial market opportunities arising from the AI Renaissance.

AI’s Financial Opportunities

Decentralized Compute Possibilities

• “Decentralized compute,” which sounds like a forced marriage of the crypto and AI industries, is actually a very efficient endgame for the future of AI (and it does not actually require crypto in many cases).

• Right now, compute is fairly centralized, with supply coming from major players like CSPs.

• Meanwhile, consumer-grade hardware is everywhere. iPhones and laptops are ubiquitous, and many devices have spare capacity (they are not running at full utilization at all times).

• The vision for decentralized compute is to create an “edge” market where any user’s device can contribute compute to a network – and be paid for it.

• On the crypto front, startups like Gensyn and Akash are creating incentivized compute networks.

Compute Infrastructure

BitOoda’s Role in the AI and Compute Ecosystems

As compute transforms into a financialized commodity, BitOoda has developed a flywheel of products and services designed to help clients gain access to the broad spectrum of resources and market solutions across the compute ecosystem.

We welcome collaboration on all the financial markets opportunities described in this report; please reach out with interest!

Conclusion

The Great Convergence

Since its inception, BitOoda has held a core thesis revolving around the concept of convergence: that the rise of new technology verticals – from AI to blockchain to zero knowledge proofs – would all lead to an unquenchable demand for compute.

Compute would therefore have to transform from a resource maintained by a few major cloud service providers and data centers into a commodity, which could be made portable and could trade across time and geographic locations.

Financialization has unlocked capital in the US ecosystem, with robust equity and debt markets helping companies grow by connecting them with willing providers of capital.

The financialization of compute is an inevitable step toward increasing capital access and efficiency in compute markets, ultimately resulting in a more efficient allocation of compute as a commodity.

The rapid rise of AI is accelerating this convergence of compute markets with power markets and financial markets.

In the end, with financialization catalyzing a race toward mature compute markets and lower prices, the customer will be the ultimate winner.

Disclosures

Purpose

This research is only for the clients of BitOoda. This research is not intended to constitute an offer, solicitation, or invitation for any securities and may not be distributed into jurisdictions where it is unlawful to do so. For additional disclosures and information, please contact a BitOoda representative at info@bitooda.io.

Analyst Certification

Vivek Raman, the primary author of this report, hereby certifies that all of the views expressed in this report accurately reflect his personal views, which have not been influenced by considerations of the firm’s business or client relationships.

Conflicts of Interest

This research contains the views, opinions, and recommendations of BitOoda. This report is intended for research and educational purposes only. We are not compensated in any way based upon any specific view or recommendation.

General Disclosures

Any information (“Information”) provided by BitOoda Holdings, Inc., BitOoda Digital, LLC, BitOoda Technologies, LLC or Ooda Commodities, LLC and its affiliated or related companies (collectively, “BitOoda”), either in this publication or document, in any other communication, or on or through http://www.bitooda.io/, including any information regarding proposed transactions or trading strategies, is for informational purposes only and is provided without charge. BitOoda is not and does not act as a fiduciary or adviser, or in any similar capacity, in providing the Information, and the Information may not be relied upon as investment, financial, legal, tax, regulatory, or any other type of advice. The Information is being distributed as part of BitOoda’s sales and marketing efforts as an introducing broker and is incidental to its business as such. BitOoda seeks to earn execution fees when its clients execute transactions using its brokerage services. BitOoda makes no representations or warranties (express or implied) regarding, nor shall it have any responsibility or liability for the accuracy, adequacy, timeliness or completeness of, the Information, and no representation is made or is to be implied that the Information will remain unchanged. BitOoda undertakes no duty to amend, correct, update, or otherwise supplement the Information.

The Information has not been prepared or tailored to address, and may not be suitable or appropriate for the particular financial needs, circumstances or requirements of any person, and it should not be the basis for making any investment or transaction decision. The Information is not a recommendation to engage in any transaction. The digital asset industry is subject to a range of inherent risks, including but not limited to: price volatility, limited liquidity, limited and incomplete information regarding certain instruments, products, or digital assets, and a still emerging and evolving regulatory environment. The past performance of any instruments, products or digital assets addressed in the Information is not a guide to future performance, nor is it a reliable indicator of future results or performance.

Ooda Commodities, LLC is a member of NFA and is subject to NFA’s regulatory oversight and examinations. However, you should be aware that NFA does not have regulatory oversight authority over underlying or spot virtual currency products or transactions or virtual currency exchanges, custodians or markets.

BitOoda Technologies, LLC is a member of FINRA.

“BitOoda”, “BitOoda Difficulty”, “BitOoda Hash”, “BitOoda Compute”, and the BitOoda logo are trademarks of BitOoda Holdings, Inc.

Copyright 2023 BitOoda Holdings, Inc. All rights reserved. No part of this material may be reprinted, redistributed, or sold without prior written consent of BitOoda.
