This report is the final installment in a four-part series on DeAI:
Part 1 – DeAI I: The Tower and the Square (Big Tech vs DeAI Overview)
Part 2 – DeAI II: Seizing the Means of Production (Infrastructure)
Part 3 – DeAI III: Composable Compute (Middleware)
Part 4 – The Agentic Economy (Apps)

Vox Populi: War of the Rails
The fabric of the internet is shifting. In under three years, we may very well see PhD-level silicon intelligence in many domains. LLMs are rapidly proving capable of playing a human-like role in combining memory, reasoning, and tool utilization to accomplish higher-order tasks.

The marginal cost of intelligence descending to zero will have broad ramifications for how users’ demands are met. Chatbots have already eaten considerably into search, but indexing the world’s information will pale in comparison to indexing active intelligence: a command bar for your life.
The traditional search + social discovery funnel -> “there’s an app for that” execution path may soon prove obsolete, replaced by expressed intents -> agent abstraction -> execution.

English is the hottest new programming language: an army of robots available for summons by anyone who speaks natural language. Robots which will increasingly summon themselves. We are shifting from a digital economy to an agentic economy, and as the shift is largely in the realm of software, it will be faster than most imagine.

Source: Davide Crapis: The Internet of Agents
However, crypto’s role in this rapidly approaching future is far from certain. In many ways, enterprises are better positioned. Apple Intelligence – your trusted on-device partner. Google Assistant – seamlessly integrating your inbox, your calendar, your browsing history, your YouTube algo, and location data. Other enterprises have the customer context, behavioral data, and distribution to better meet the needs of customers in many subdomains. Lobbyists will ensure regulations favor incumbents and stale financial rails. National security concerns will embalm the sector in red tape, hindering open innovation’s chances as the race to amass data and compute ascends the OOMs of the scaling laws.
Agents will inherit the internet, but whether crypto has a role to play or the agentic economy gets subsumed by Super Clusters, C-Corps, Stripe APIs, Swift, and Suits may well be determined by something as mundane as payments infrastructure.
In short, do blockchains provide a superior infrastructure on which to build this agentic economy, or will they prove a distraction: a cute memecoin casino relegated to an ASI footnote in humanity’s multi-terabyte brain-machine-interface history module uploads in the decades to come?
In many ways, this is a question of “the spoils”. I’m betting crypto rails have a role to play in the coming agentic economy due to their open, permissionless, machine legible infrastructure and more seamless micro / cross border transactions, but I’m also not naïve enough to say this will be a “we-are-all-gonna-make-it-Kumbaya” inequality-reducing story. AI is happening and will bring abundance and inequality. The question is about how the spoils will be divided… and by whom?
Two versions of the future lay before us:
1) A continuation of the status quo: corporations continue as dominant first-class citizens under existing institutions. Agents are shoe-horned into existing frameworks, and we see an acceleration of the trends of the prior decade: greater centralization, greater concentration in market capitalization, a looming services deflation which keeps a lid on prices and allows for continued money printing and asset inflation, spoils allocated by the current halls of power.
Winners: boomer lifetime politicians, existing managerial elite, Big Tech shareholders, homeowners, top AI labs, Washington D.C., Beijing, The Cantillon Effect
2) Reform: Craft a world in which agents, like corporations before them, become first class citizens – able to operate autonomously on open rails. A recognition that the AI revolution, like the industrial revolution before it, will upend societal organization – a profound shift in production that will require significant institutional reform to navigate.
In my opinion, crypto provides the best toolset to enable these reforms (or to build new ones) amidst the century’s technological inflection. An open, composable data, identity, and payments layer on which anyone can build – composable agents compounding at the speed of self-learning software. Infrastructure to enable pluralistic intelligences – synthesizing human and agentic inputs – in a tapestry of compute, commerce, capital, and governance. From shareholders to token holders. From corporations to agentic protocols. From primaries to futarchies.
Winners: internet native millennials / gen z, AI devs, solopreneurs, crypto bag holders. In short, the internet.
Currently, a shoe-horning of artificial intelligence into the existing financial and political institutions is the likely path forward. Large firms get larger, the percentage to labor diminishes, rocketing profits are reallocated by the political process. A ballooning role of government and the welfare state. Honestly, this could very well be the most sensible path forward.
Crypto’s proposed merger of AI and open networks is a gamble. A fundamental break with our industrial era institutions. A chance to keep pace with the acceleration by upgrading our base infrastructure. A bet on a 21st century renaissance. A bet that storming the Bastilles of synthetic intelligence and trusting our future to broader swaths of humanity can enable unmitigated human flourishing.
In short, a Gutenberg moment for artificial intelligence: that is the DeAI bull case.
Admittedly, this is risky. Unleashing superintelligence on a network of permissionless, un-censorable nodes could prove a terrible mistake.
And yet, the status quo feels even more untenable. AI safety is a very real concern but too often carries water for totalitarian leanings, either explicitly or via economic servitude. In the famed words of Lord Acton: “power tends to corrupt, and absolute power corrupts absolutely.” The current path will entrust a small group of elites, with unaligned incentives, with potentially the greatest concentration of power in human history.
Few men have done well under such temptations.
We are standing on the near bank of the Rubicon. The die is not yet cast. No alternative is bulletproof. Each step is a gamble when walking the narrow corridor between anarchy and leviathan, between annihilation and utopia.
The agentic economy will bring tens if not hundreds of trillions in value. The allocation of that value and power will depend on the infrastructure which underpins it.
The hundred-trillion dollar question: on which rails will this agentic economy be built?
That question, the home of agentic intelligence, may be answered by the enablers laid out below.
Enablers
Payments
Pros
Crypto’s ability to facilitate permissionless, programmable, cross-border, and even micro transactions with instant settlement is probably its greatest Schelling point as the optimal destination to build the agentic economy.
The current financial system is a tapestry of jurisdictions and systems pieced together by global pipes like SWIFT and VISA / Mastercard with a host of banking intermediaries and high settlement times (T+2/T+15 depending on the payment type). This is just not the system one would design from scratch to underpin an economy driven by modular, compute efficient, networked intelligences with instant machine-to-machine communication originating from compute scattered across the globe.
Bank accounts and credit cards require stringent KYC, making it difficult to onboard agents. Theoretically, AI agents could be authorized to use bank accounts or credit cards by an owner and leverage other enterprise APIs across the existing financial system, but clearly this would retard the pace of innovation and the number of agents enormously, shackling the compounding of zero-marginal-cost software to paper-based legal structures.
Again: safety and control vs. freedom and dynamism.
Many agentic use cases would likely rely on real-time machine-to-machine interactions and payments, potentially very small, and often cross border. Realistically, only high-throughput global state machines like Solana or Monad or scaled Layer 2s would be able to accommodate many of these use cases – almost certainly priced out by today’s highly-fragmented, legacy infrastructure.
The number of global transactions will inflect exponentially. In ten years’ time, the number of agents will likely dramatically exceed the number of humans on earth. SWIFT is not built for this. The options are to 1) massively throttle demand through regulation or 2) transition to a new infrastructure.
This doesn’t even require moving away from fiat. We have digital on-chain Eurodollars right now (to the tune of >US$172b) with annualized settlement of >US$7.5T. Given there are ~US$13 trillion in Eurodollars outstanding, there is a lot of room to ramp up on the new rails.

Source: rwa.xyz, Stablecoin Volumes
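To make the machine-to-machine case concrete, here is a minimal sketch (my own illustration, not any project’s SDK) of one agent paying another in an ERC-20 stablecoin via web3.py on an EVM chain. The RPC URL, token address, and raw private key handling are placeholders; real agents would need far more careful key management, a challenge discussed below.

```python
# Minimal sketch: one agent paying another in an ERC-20 stablecoin via web3.py.
# RPC URL, token address, and private key handling are placeholders for illustration.
from web3 import Web3

ERC20_TRANSFER_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

def pay_agent(rpc_url: str, token_address: str, sender_key: str,
              recipient: str, amount_usd: float) -> str:
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    account = w3.eth.account.from_key(sender_key)
    token = w3.eth.contract(address=Web3.to_checksum_address(token_address),
                            abi=ERC20_TRANSFER_ABI)
    # Most USD stablecoins use 6 decimals; a careful agent would read decimals() on-chain.
    raw_amount = int(amount_usd * 10**6)
    tx = token.functions.transfer(Web3.to_checksum_address(recipient), raw_amount).build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
        "gas": 100_000,
        "gasPrice": w3.eth.gas_price,
        "chainId": w3.eth.chain_id,
    })
    signed = account.sign_transaction(tx)
    # Attribute is .rawTransaction in web3.py v6 and .raw_transaction in v7+.
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    return tx_hash.hex()
```

No bank onboarding, settlement in seconds, and fees that can fall to fractions of a cent on a high-throughput chain: that is the property set the agentic economy would be leaning on.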
Several projects are already working on this:
- Coinbase launched the first agent to agent payment in August
- Skyfire is launching “the financial stack” for the AI economy
- Payman supports AI payments in stables, crypto, or regular bank accounts
- Nevermind is building a protocol for AI-to-AI payments, partnering with other leading agent-focused platforms like OLAS, SuperAgent, Flock.io, Polywrap and Naptha AI

Source: nevermind
Challenges:
Agentic payments are not without challenges which generally fall into two buckets:
- LLMs managing money is dangerous: they are prone to hallucinations and non-deterministic outputs that can misinterpret intents, which is clearly untenable in high-value transactions
- Autonomous key management is complex and fraught with risk
We will see how several projects are tackling these issues later in the application section.
Conclusion:
Payments have arguably been crypto’s Achilles’ heel to date: regulatory barriers and a horrendous UX / onboarding experience have held back adoption and the liquidity needed in a multi-sided network to underpin broader economic activity and “real world” use cases beyond speculation.
However, crypto’s global, permissionless, and software-native characteristics are well-placed to facilitate machine-to-machine payments – likely the dominant share of transactions in the near future. Ultimately, this is the bull case for web3 more broadly: AI agents injecting the long-absent demand into decentralized infrastructure’s decade-long supply-side build-out, and the activity may dwarf any possible forecast for human-based systems.
(Aside: Given the centrality of payments to crypto’s agentic economy thesis, Delphi plans a dedicated report covering the pros and cons vs. fiat incumbents’ agentic roadmaps in the months ahead.)
However, payments are not the only variable to consider, and Web3’s tooling ecosystem is significantly behind that of web2.
Tooling & Orchestration
There is a credible argument to be made that the current generation of LLMs has already achieved AGI or human-level performance across a wide range of domains. The real bottleneck now is pushing these capabilities into “production” – moving from research breakthroughs into commercial development.
Alongside promising techniques like search and chain-of-thought which take advantage of greater inference-time compute…
OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there’re only 2 techniques that scale indefinitely with compute: learning & search. It’s time to shift focus to… pic.twitter.com/jTViQucwxr
— Jim Fan (@DrJimFan) September 12, 2024
One of the obvious unlocks is enabling LLMs to use tools.
While LLMs are fantastic generalists, their limitations include a lack of up-to-date information, hallucinations, weak performance in low-resource languages, and, occasionally, even basic math. Prior to LLMs, humans developed a sophisticated array of tools – from search engines to calculators to software applications – to compensate for our own limitations and boost productivity. Allowing LLMs similar access will enable another step change in performance: unleashing them from the walled gardens of chat rooms to become coordinators of complex, multi-step tasks.
“The rise of tool-using LLMs marks the next big shift in computing. Just as the internet connected computers in a global network, LLMs are now connecting human requests to real world actions”
Tools convert synthetic minds from thinkers to doers.
ToolFormer
An example of this is ToolFormer, a recent paper which explores training LLMs to teach themselves to use external tools via simple APIs. The model is trained to decide which APIs to call, when to call them, which arguments to pass, and how best to incorporate the results into future token prediction.
ToolFormer then finetunes the language model itself on the API calls it considers useful, learning when and how to use a variety of tools.
The real commercial application of LLMs will only come after we learn how to integrate them into more complex workflows. ToolFormer is proving LLMs can serve this coordination function within broader systems.
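As a simplified illustration of the inference-time mechanics (a toy, not the paper’s training pipeline), the snippet below shows how ToolFormer-style API calls embedded in generated text – e.g. `[Calculator(400 / 1400)]` – can be detected, executed, and spliced back into the sequence before generation continues. The call format follows the paper’s convention; the tool registry and outputs are placeholders.

```python
import re

# Toy tool registry; ToolFormer's actual tools include a QA system, a calculator,
# a Wikipedia search engine, a machine translation system, and a calendar.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only; never eval untrusted input
    "Calendar": lambda _: "Today is Friday, November 1, 2024.",
}

CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_api_calls(generated_text: str) -> str:
    """Replace [Tool(args)] markers with [Tool(args) -> result], mirroring
    ToolFormer's convention of feeding tool results back into the context."""
    def run(match: re.Match) -> str:
        tool, args = match.group(1), match.group(2)
        if tool not in TOOLS:
            return match.group(0)  # leave unknown calls untouched
        result = TOOLS[tool](args)
        return f"[{tool}({args}) -> {result}]"
    return CALL_PATTERN.sub(run, generated_text)

print(execute_api_calls("Out of 1400 participants, 400 [Calculator(400 / 1400)] passed the test."))
```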
Voyager
Learning to use tools, however, is only the beginning. After all, “the models just wanna learn”, increasingly demonstrating capabilities for exploration and “self-play.”
The Voyager paper teaches an agent to play Minecraft, combining tool use with a penchant for open-ended exploration while constructing an ever-growing skill library of actions that are reusable, interpretable, and generalizable to novel tasks. The agent encounters new problems and tries to solve them iteratively, storing successful actions and “acquired skills” which can be redeployed or combined as it encounters novel challenges.

Figure 2: VOYAGER consists of three key components: an automatic curriculum for open-ended exploration, a skill library for increasingly complex behaviors, and an iterative prompting mechanism that uses code as action space.
*Source: https://arxiv.org/abs/2305.16291.*
The results were encouraging: obtaining 3.3x more unique items and traveling 2.3x longer distances than prior SOTA performance.

Figure 7: Map coverage: bird’s eye views of Minecraft maps.
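Below is a stripped-down sketch of the skill-library loop described above (my own toy, not Voyager’s code): programs that succeed are stored with a description and retrieved for reuse when a new task arrives. Voyager retrieves skills by embedding similarity over their descriptions; crude keyword overlap stands in for that here.

```python
from dataclasses import dataclass, field

@dataclass
class SkillLibrary:
    """Toy version of Voyager's skill library: successful programs are stored with a
    description and retrieved by keyword overlap (the paper uses description embeddings)."""
    skills: dict = field(default_factory=dict)  # name -> (description, code)

    def add(self, name: str, description: str, code: str) -> None:
        self.skills[name] = (description, code)

    def retrieve(self, task: str, top_k: int = 3) -> list:
        task_words = set(task.lower().split())
        scored = [
            (len(task_words & set(desc.lower().split())), name, code)
            for name, (desc, code) in self.skills.items()
        ]
        return [(name, code) for score, name, code in sorted(scored, reverse=True)[:top_k] if score > 0]

library = SkillLibrary()
library.add("craft_wooden_pickaxe", "craft a wooden pickaxe from planks and sticks",
            "def craft_wooden_pickaxe(bot): ...")
library.add("mine_stone", "mine stone blocks using a wooden pickaxe",
            "def mine_stone(bot): ...")

# When a new task arrives, relevant skills are pulled into the prompt so the agent
# can compose them instead of re-learning from scratch.
print(library.retrieve("mine three stone blocks"))
```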
While Minecraft is a fun example, the approach here – of self-play and compounding learning – should not be underestimated. AlphaGo famously leaned on self-play, and its successor AlphaGo Zero started with only the rules of the game and, after playing itself millions of times, surpassed the version that beat world champion Lee Sedol. Voyager is an early look at pushing this technique from clearly defined domains like Go to more open-ended environments like virtual worlds. The potential is clear: many expect GAN-type simulations to play a role in AI labs scaling “the data wall”. Could we take a similar approach in exploring molecular compounds? Gene therapies? Novel materials?
And while the ability of LLMs to use tools and even to self-learn is encouraging, the compounding is unlikely to stop at the individual agent level. After all, LLMs are just a single component in a larger system that includes memory, knowledge, tools, simulators and… other models.
Learning how to optimize at the meta level is where systems start to get very interesting…
GPTSwarm
Just as single agents may be able to optimize performance through learning, GPTSwarm documents a graph-based approach to optimizing complex multi-agent systems.
Chain-of-Thought, ReAct, Tree-of-Thought, Reflexion, Graph-of-Thought, etc. are all emerging approaches to prompting LLMs in a more structured way to better harness their capabilities for specific tasks. And while frameworks like AutoGPT, BabyAGI, LangChain, and LlamaIndex can be helpful across a wide range of functionalities, the explosion of LLMs points to a future which strings together multiple, smaller, specialized models into multi-agent systems.
GPTSwarm describes entire systems using computational graphs, building up from individual nodes to agent graphs to combined composite graphs which represent multi-agent systems.
To break it down:
- A node represents a fundamental operation (LLM inference, tool utilization, function call, etc.)
- An agent – conceptualized as a graph – consists of multiple nodes
- A “swarm” consists of the “composite graph” of interconnected agents
- And the “edges” within an agent define its “execution topology” while the edges between agents “establish collaboration and communication” between them

GPTSwarm uses automatic graph optimizers to:
- Refine node-level LLM prompts within the agent itself
- Improve agent orchestration by changing graph connectivity along the edges, as sketched below
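To make the node / agent / swarm hierarchy concrete, here is a minimal data-structure sketch (my own illustration, not GPTSwarm’s code): nodes are operations, an agent is a small graph of nodes, and a swarm holds the inter-agent edges whose weights an optimizer could adjust.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Node:
    """A fundamental operation: an LLM call, a tool invocation, a function, etc."""
    name: str
    operation: Callable[[str], str]

@dataclass
class Agent:
    """An agent is a graph of nodes; its internal edges define its execution topology."""
    name: str
    nodes: Dict[str, Node] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)  # (from_node, to_node)

@dataclass
class Swarm:
    """A swarm is the composite graph of agents; inter-agent edges carry weights
    that a graph optimizer can tune (pruning weak links, reinforcing useful ones)."""
    agents: Dict[str, Agent] = field(default_factory=dict)
    edges: Dict[Tuple[str, str], float] = field(default_factory=dict)  # (from_agent, to_agent) -> weight

    def reinforce(self, src: str, dst: str, reward: float, lr: float = 0.1) -> None:
        # Crude stand-in for edge optimization: nudge connectivity toward edges
        # that contributed to a successful task outcome.
        w = self.edges.get((src, dst), 0.5)
        self.edges[(src, dst)] = min(1.0, max(0.0, w + lr * reward))

swarm = Swarm()
swarm.agents["researcher"] = Agent("researcher")
swarm.agents["coder"] = Agent("coder")
swarm.edges[("researcher", "coder")] = 0.5
swarm.reinforce("researcher", "coder", reward=1.0)  # edge strengthened after a solved task
```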
The outcomes have been encouraging based on performance vs. other leading models on the GAIA benchmark.

Again, the key here is the “self-learning component” but this time at the “meta” system level where graph connectivity can self-improve as a task is being solved. As we learned last month in our segment on GNNs, this graph-based approach is well-suited to blockchain environments.
Combining tool utilization, self-learning, and system level orchestration is encouraging for integrating LLMs into complex workflows. However, the DeAI stack is much further behind its centralized counterparts in this domain.
Structural Reform
The language model is just one tool in a broader system. How we integrate and orchestrate these components matters a great deal. In fact, Sequoia now puts workflows above unique data assets as a primary source of competitive long term moats in AI:
“’The data moats’ are on shaky ground: the data that application companies generate does not create an insurmountable moat, and the next generations of foundational models may very well obliterate any data moats that startups generate. Rather, workflows and user networks seem to be creating more durable sources of competitive advantage”
Source: Sequoia: Generative AI Act II
However, building distributed workflows is difficult based on the current architecture of the web which enables scaling within organizations and platforms but not between them. In many ways, AI in the cloud is limited in cost, data privacy, and interoperability. Performant agents and personal assistants require access to data from multiple locations – from messaging apps, to email, to calendars, to social networks, to work documents, to finances – which limits the capabilities of all but the largest ecosystems. The current race appears to be Google, Apple, and META on the consumer side and MSFT vs. Google on the enterprise side.
Arguably, no one has thought deeper about this dilemma than @Richardblythman and the team over at Naptha.ai:
“We believe that to realize AIs full potential, the architecture of the Web needs to be fundamentally re-thought from first principles”… “Decentralized workflows can run on one or more nodes (rather than on one central server), with different LLMs, and with many local data sources, opening up new use cases for AI developers.”
However, the centralized AI stack has been booming with new developer tools and application frameworks…

In DeAI, by contrast, just deploying, operating, and maintaining AI services takes significant effort, as “entire categories of infrastructure are missing”:

Tools like Kubeflow Pipelines and Temporal – centralized orchestrators of compute, data, and microservices – have been crucial to scaled web2 apps, but they require a centralized scheduler, and many existing workflow tools are similarly limited by their lack of modularity across nodes.
Naptha’s vision is to solve these issues by creating a “framework and infrastructure for developing and running massive multi-agent systems across many devices”, deploying and orchestrating apps, workflows, services, and data using decentralized infrastructure.
- Nodes manage task execution, user verification, and storage operations
- The Naptha Hub handles user authentication and data management and provides methods for listing and managing nodes, modules, tasks, and proposals

Source: Naptha AI docs
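To illustrate that division of labor, here is a generic sketch of the node / hub split (illustrative only, not Naptha’s actual API): a coordination hub tracks registered nodes and routes tasks, while each node executes and stores results locally.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Node:
    """A worker device: executes tasks and stores results locally."""
    node_id: str
    results: Dict[str, str] = field(default_factory=dict)

    def run_task(self, task_id: str, payload: str) -> str:
        output = f"processed({payload})"   # stand-in for running an agent module locally
        self.results[task_id] = output
        return output

@dataclass
class Hub:
    """A coordination layer: tracks registered nodes and routes tasks to them.
    (Purely illustrative; the actual Hub also handles auth, modules, and proposals.)"""
    nodes: Dict[str, Node] = field(default_factory=dict)

    def register(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def dispatch(self, task_id: str, payload: str) -> str:
        # Naive routing: pick the least-loaded node.
        node = min(self.nodes.values(), key=lambda n: len(n.results))
        return node.run_task(task_id, payload)

hub = Hub()
hub.register(Node("node-a"))
hub.register(Node("node-b"))
print(hub.dispatch("task-1", "summarize dataset X"))
```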
While this is a tall order, Naptha is far from alone in its effort to build distributed systems of orchestration and tooling for DeAI developers: https://www.topology.vc/deai-map
Admittedly though, in our current paradigm of data siloes, bootstrapping this vision will be difficult. Most large data repositories still sit behind walled gardens. In the age of zero-marginal-cost intelligence, data is now recognized as one of a company’s most valuable assets. The incentives to open these vaults to a shared data layer are approximately… zero.
However, they might not have a choice.
Web Proofs
Web Proofs – also known by the questionable moniker “zkTLS” – have emerged recently as a promising tool for breaking down web2 data siloes and porting historical and reputational data in pursuit of alternative, less extractive networks.
The potential ramifications are big.
Web2 platforms have dominated the internet era through network effects: subsidizing demand to attract supply – aggregating content providers (TikTok, Facebook, WeChat) or drivers (Uber, DoorDash, Grab) or merchants (Amazon, Alibaba, Flipkart) – in a winner-take-most flywheel.

At inflection, investors pile into the market winner with cheap capital, subsidizing the network liquidity and giving the aggregator a privileged position to begin marching up take rates on captive users and merchants.

zkTLS aims to reduce supply side lock-in by enabling users to port reputational, behavioral, and other data points in a verifiable manner.
It’s a squishy and imperfect comparison, but there are parallels to what India was able to pull off with Aadhaar and UPI. By digitizing identity and prying open payment rails from the oligopolistic clutches of the tech barons, the Indian government – through foresight, smart regulation, and technology – erected an open identity and payments layer to promote exchange while limiting the extraction from any one aggregator. The results speak for themselves:

Web Proofs have the potential to provide a similar, “always open” data layer on which alternative networks can emerge. Uber drivers could port their 5 star ratings and 500+ rides to other ride sharing platforms. AirBnB hosts could do likewise. Small merchants could list their inventory both on and off platform – looking to cut out the middleman’s increasing take rates.
On the demand side, users can prove their loyalty (I listened to Taylor Swift >50,000x on Spotify), their purchasing power (>$100,000 bank balance), their influence (>100,000 twitter followers), their participation (XYZ behaviors in web2 / web3 game), rolling up into a rich aggregated data layer.
It’s kind of like an oracle network, but for “private” feeds as opposed to public data sets. An API which web2 giants cannot turn off. The really exciting prospect is how, once ported, these overlapping networks could lead to something more powerful than any one network: a network of networks.
Such a data layer would have ramifications for marketplace take rates (think vampire attacks from web2 competitors or web3 networks), but more importantly, it would allow intelligences and services to be built on those open data sets to better compete with closed tech ecosystems more broadly.
This might sound too good to be true, but it’s actually based on a common technology which powers 95% of internet connections today.
For a deep dive into how it works and applications leveraging it, please check out Jordan’s latest report (where the graphics below originated). Below is a mere appetizer, combining thoughts from him as well as Madhavan Malolan.
How does it work?

The technology builds on the Transport Layer Security (TLS) handshake which occurs between the client and the server (the “S” in HTTPS, which secures much of the internet today). To simplify, the handshake establishes a secret session key – a private tunnel between the browser and the website – which encrypts and decrypts requests and responses and is discarded once the session completes.
Web proofs add a new wrinkle to the above exchange by incorporating a verification method to prove certain, select information came from the server in question in a manner on which outside parties can rely.
As with inference, there are multiple technologies which can assist in verification, so “zkTLS” is a bit of a misnomer: TEEs and MPC can also be used. So far, there are three primary approaches (a toy sketch of the proxy model follows the list):

- TEEs use the familiar tamper-proof hardware to provide an attestation that the correct TLS handshake with the website in question was completed (Clique is an example)
- MPC splits the secret key into multiple shards. Instead of the browser generating the session key on its own, it coordinates with a network of nodes which jointly complete the handshake on its behalf. As long as the outside party believes you have not bribed or colluded with those nodes, they can trust the verified output (TLS Notary is an example).
- The Proxy Model inserts a “forwarding intermediary” into the exchange flow which provides attestation of the encrypted requests and responses between the browser and the website. The browser then creates a zk proof of the response decryption that can be shared with the third party. In effect saying: “here’s the encrypted data and here’s a proof from the proxy that it was sent by the website, and here’s the zk proof that I have the shared key that decrypts it, and here is that decryption.” (Reclaim protocol – one of the top players in the space – is an example of the proxy model)
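The toy below walks through the proxy model’s moving parts using ordinary symmetric encryption: the proxy attests to the ciphertext it forwarded, and the verifier checks that the claimed plaintext really decrypts from that ciphertext. In a real system the client would prove correct decryption in zero knowledge rather than revealing the session key, and the proxy would sign with a public-key scheme rather than a shared HMAC key; both are simplified away here.

```python
import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# --- Setup: browser and website share a session key from the TLS handshake ---
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
response = b'{"account_balance": "102,450.17 USD"}'
ciphertext = AESGCM(session_key).encrypt(nonce, response, None)

# --- Proxy: forwards traffic and commits to the ciphertext it observed ---
# (an HMAC stands in for the proxy's signature; the verifier trusts this key's holder)
proxy_key = os.urandom(32)
attestation = hmac.new(proxy_key, hashlib.sha256(ciphertext).digest(), "sha256").hexdigest()

# --- Client: in production, proves in zero knowledge that session_key decrypts the
# attested ciphertext to `response`; here we simply decrypt and reveal. ---
claimed_plaintext = AESGCM(session_key).decrypt(nonce, ciphertext, None)

# --- Verifier: checks the attestation matches the ciphertext, then the decryption ---
expected = hmac.new(proxy_key, hashlib.sha256(ciphertext).digest(), "sha256").hexdigest()
assert hmac.compare_digest(attestation, expected), "proxy attestation mismatch"
assert claimed_plaintext == response
print("verifier accepts: data provably came through the attested TLS session")
```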
While still early days, as Shlok Khemani points out in this excellent post, we are already seeing exciting utilization of web proofs in the wild:
- Teleport uses zkTLS to allow drivers to port Uber ratings
- Nosh uses Opacity Network to power its food delivery competitor
- zkPass uses MPC TLS to port tradfi credit scores on-chain
- Equal is a SaaS app powered by the Reclaim Protocol to check addresses from Swiggy orders vs. requiring ID uploads
Many on-chain use cases today are held back by a lack of data / verified identity information, and building up that reputation from scratch is a classic chicken-and-egg problem.
Web proofs are extremely early, and frankly might not capture that much value themselves. However, they may prove instrumental in breaking down the data siloes, paving the way for the golden era of applications, released from excessive platform value extraction.
Fat Applications
For crypto to remain relevant, the fat protocol thesis will have to die. As with other industries, investments in infrastructure will need to see returns creep up the stack over time.

Like cloud, both crypto and AI will need to see an ROI on infra outlays via greater value capture at the application layer to justify the valuations of the underlying platforms themselves. If not, the platforms will shrink to accommodate the demand shortfall as speculation wears thin.
The AI applications will come. Software development will approach the cost of electricity, leading to its commodification and an explosion of applications.
Again though, most of these applications do not explicitly need crypto. Even in web2, the application set is still hazy. We have chat bots, co-pilots, and enterprise productivity tools, but we are barely scratching the surface.

Source: Sequoia: Generative AI’s Act o1
Outlining the killer applications uniquely enabled by crypto and AI working in tandem is more difficult still. If the below segment feels more speculative or forward leaning than earlier reports, that’s because it is. The infrastructure is still coming together, and the experiments at the application layer are only just beginning – both in web2 and in web3.
Below is a sampling of AI application categories which are sizable and potentially superior because they leverage crypto’s unique properties. However, my imagination is bounded in a way the collision of intelligence too cheap to meter with the creativity of eight billion minds is not.
The only thing we know for sure is that reality will be much weirder than anything laid out below.
Prompt Translators: Indexing the Internet of Value
Perhaps the most privileged position in crypto will be the party mainstream users trust to translate natural language prompts into on-chain actions. Most crypto applications are fundamentally about forging novel networks which only generate real value past a certain liquidity threshold, after which the network can inflect. To date, crypto’s adoption has been crippled by a combination of 1) regulatory hostility, 2) the mobile OS duopoly, and 3) what can only be described as end-user UX masochism emerging from a fetish for niche technical infrastructure.
By allowing end users to interact with on-chain environments using natural language, mappers have the opportunity to upgrade crypto UX and serve as de facto kingpins in the race for “chain abstraction”.

Source: Kaito AI
Challenges
However, serving as this trusted liaison between user intent and on-chain execution is fraught with complexity.
- Black box: The LLMs which power most agents today are largely black boxes, prone to hallucinations, and unable to handle vague instructions, making them suspect for high-stakes / high-value transactions.
- Private key mgmt: Even if agents were 100% dependable, securely providing them with private keys is a complex endeavor
- Relevant context: Furthermore, agents often lack the context or even the native ability to navigate and transact on-chain
- Limited tooling: as discussed earlier, the tooling for web3 devs keen to integrate AI into on-chain applications is still nascent
A few projects which are taking on these challenges, at least partially, are Wayfinder, Brian Knows, Aperture Finance, Fungi Agent, and SphereOne. We touch on a couple of the approaches below:
WayFinder
Just as Apple and Google are the leading contenders to serve as users’ trusted on-device partner, routing queries to applications or off-device intelligences as needed, Wayfinder appears off to an early lead as the web3 “translator of choice”, mapping user intents into on-chain transactions.
Originally developed as a core piece of infrastructure for “Parallel Colony”, a blockchain-based simulation game from Parallel Studios, Wayfinder is now abstracting the complexities of navigating within and across blockchain ecosystems through its network of “wayfinding paths” which form the edges and nodes of a larger “knowledge graph” of smart contracts and execution routes.
Nodes in the Wayfinder graph are protocols, contracts, contract standards, assets, functions, and routines. Representing each function as a separate, callable entity allows shells (agents) to interact directly with blockchain contracts, solving one of the core challenges.
The Wayfinder Stack:

Source: Wayfinder Whitepaper
Just as Google indexed the internet to make navigating the web accessible, Wayfinder aims to index the internet of value, routing NLP queries to the most effective execution destination. Users no longer need to navigate the complexities of bridges and cross-chain swaps and chain fragmentation but can outsource those decisions to the collective intelligence of the network.
Wayfinder incentivizes community members with token rewards for submitting “new paths” to the network library. Submitters must stake tokens to combat nefarious activity, but once verified, submitters can share in the transaction fees of users who follow that particular pathway.
Users can invest in their own “shells” (agents), building specific skills and incorporating elements like price data, pinning specific memories, uploading knowledge feeds, and more. Shell construction uses a graphical user interface easily navigable by “normies” to encourage customization, and integrates with frameworks like LangChain, LlamaIndex, and AutoGen for tool utilization.
The agents also have “memories” spanning shorter term context and tool use for solving specific tasks but also longer-term cumulative memories. At a meta level, agents can remember and learn from both their own experiences and that of other agents in the network, compounding the rate of learning across the ecosystem.
The goal is to enable shells to undertake more complex actions over time: from trading to dollar-cost-averaging to contingent orders to scheduled tasks to airdrop farming and yield optimization schemes across all major ecosystems, beginning with Solana, Ethereum, Base and Cosmos.
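Here is a hypothetical sketch of the path economics described above (illustrative only; the parameters and interfaces are not Wayfinder’s): submitters stake to add a path, verified paths join the graph, and submitters accrue a share of fees when shells route through them.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Path:
    """An edge in the knowledge graph: an executable route between on-chain states,
    e.g. a swap-and-bridge route from a stablecoin on one chain to an asset on another."""
    path_id: str
    submitter: str
    stake: float
    verified: bool = False
    fees_accrued: float = 0.0

@dataclass
class PathRegistry:
    min_stake: float = 100.0           # hypothetical parameter
    submitter_fee_share: float = 0.2   # hypothetical parameter
    paths: Dict[str, Path] = field(default_factory=dict)

    def submit(self, path_id: str, submitter: str, stake: float) -> Path:
        if stake < self.min_stake:
            raise ValueError("insufficient stake to submit a path")
        self.paths[path_id] = Path(path_id, submitter, stake)
        return self.paths[path_id]

    def verify(self, path_id: str) -> None:
        self.paths[path_id].verified = True

    def route(self, path_id: str, tx_fee: float) -> float:
        """A shell executes along a verified path; the submitter earns a fee share."""
        path = self.paths[path_id]
        if not path.verified:
            raise ValueError("path not yet verified")
        path.fees_accrued += tx_fee * self.submitter_fee_share
        return tx_fee * (1 - self.submitter_fee_share)

registry = PathRegistry()
registry.submit("usdc-base->sol-solana", submitter="alice", stake=150.0)
registry.verify("usdc-base->sol-solana")
registry.route("usdc-base->sol-solana", tx_fee=1.0)
```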
Wayfinder is pioneering one approach: tapping the masses to map and index existing blockchain pathways. However, there are other approaches which abstract on-chain pathways entirely.
Solvers
Like Wayfinder, Brian Knows is targeting the NLP -> routing niche through both a consumer-facing app and developer APIs, using its internal intent-recognition engine while also routing through external solvers.

Source: Brian Knows Documentation
Clearly the future of crypto UX is NLP prompts settled on-chain but whether those intents are routed through on-chain pathways or external solver networks is an interesting trend to watch.
As my colleague Robbie points out in a recent report on crypto moats, solver networks which tap off-chain liquidity venues have been growing steadily in share across many markets:
Chart: Fragmenting bridging market reflects a lack of defensibility (third-party bridging market structure by volume)
Agents which help abstract away chain fragmentation, combined with the growing share of solver networks, raise interesting questions around blockchain market structure. On the one hand, chain abstraction could very well lead to infra-layer commodification, and those closest to the end user – the wallets, the TikToks, the Telegrams of the world – will route through whichever set of nodes provides the best economics (often their own, a la TON). On the other hand, if blockchains are primarily used for settlement, more agents and solvers may opt for the most secure and decentralized settlement layer, leading to a growing power law behind the leader who optimizes for those properties.
I’m unsure how this will play out, but the arrival of agents will have significant implications for both UX and market structure on-chain.
SphereOne
Like Wayfinder, SphereOne is an earlier-stage project which aims to become the chain-abstraction aggregator of choice – bridging NLP commands and cross-chain execution. However, the ultimate vision is perhaps more expansive – automating the creation of new pathways using developer co-pilots and agent libraries.
Blockchain development is plagued by fragmentation and interoperability challenges – with siloed wallets, interfaces, protocols, and SDKs. SphereOne aims to solve this by automating new chain integrations and multichain deployments using agents.

The system is orchestrated by the “user proxy agent” which executes the user’s intent with the help of multiple subsystems which authenticate and coordinate models and smart contracts to complete objectives. It is the brain which can pull in other tools as necessary – whether existing agents or specific tools or even real-time embeddings to help with code generation for creating new agents altogether.
The user proxy agent also provides a glimpse into autonomous payments and private key management, providing a single interface for self-custodial wallets across different chains, spinning up new wallets using MPC threshold signature schemes and decentralized key management systems like LIT.

Source: SphereOne Documentation
Instead of mapping existing pathways, SphereOne aims to provide developers with the tools to quickly build their own.
AI assisting in crypto and developer UX seems like an obvious win, but what about the inverse? Can crypto enhance the consumer AI experience more broadly?
Companions: Hope for the Incels
Most personal assistant / companion use cases have no need for crypto. Open models like Hermes from Nous which power chatbots like Venice already cover most censorship concerns, and frankly Big Tech has the context and distribution to offer compelling default solutions for most users around convenience and utility.
However, MyShell aims to differentiate its offering by leaning into the emerging trends of the attention / creator economy, the multi-modal LLM boom, and financial speculation.
If you believe, as I do, that the marginal cost of intelligence is heading to zero, then the traditional labor model underpinning our societies will undergo a transformation in the decades ahead. We are all capitalists now! Capital is the 99%, and labor is now the exclusive 1%: those elite few with the means of forging and harnessing the tools of abundance. The rest of us are the lowly tradooors, speculatoooors, and investooors – not necessarily “financial nihilists” per se, but “enlightened” in the simple realization that competing with data and compute for an hourly wage is increasingly financial suicide. Better to ride the Big Tech juggernaut and the adrenaline rush of the daily options market with a side dose of moo deng as the deflationary implosion of AI on services takes hold.
MyShell leans into these trends, building a decentralized, AI-first app store: marrying a model layer, a dev platform, and its own AI app store, fueled by crypto incentives.

MyShell’s mission is to build an open AI consumer layer, allowing everyone to own, build, and share their AI applications and agents while connecting open-source researchers, creators and consumers.
At the model layer, it has over 100 open-source models plus integrations into leading closed-source options. Incrementally, the widget center gives creators fun, easy-to-use tools – from foundational models to voice clones to stylized Stable Diffusion and GIF generators – to infuse into their apps.
The AIpp Store
While going head to head with the GPT Store may seem like a fool’s errand, MyShell is betting the choice in underlying models, less stringent guardrails, and a greater cut to ecosystem participants will make for a more dynamic consumer app ecosystem, particularly for companion / entertainment-type use cases.

Crypto comes into play at the incentive layer. The recently released Patron Badge allows users to stake on (i.e., speculate on) apps and earn points if those apps gain traction, which can later be cashed in for a slice of app earnings.
Bringing financial gamification to early AI creator applications is a savvy move for virality. Like meme coins, many of these apps will prove flashes in the pan, but the platform as a whole can persist and thrive:

Source: Not Boring: Small Applications, Growing Protocols
Let 1000 flowers bloom, supercharged with the fertilizer of financial speculation and social virality.
Just did a quick check on stats for one of our open-source AI models:
– Over 30,000 users interact with it directly on MyShell
– 2.086 Million model downloads
It’s massive, serving ~3 Million consumers and developers, and that’s just one of our models.
How many people are…
— aiko/acc 🛠 🤖 (@0xAikoDai) August 21, 2024
Others: Nectar, Bittensor Subnet 11 “Dippy” and more
This intersection of creators, AI developers, consumers, and investors is compelling property to stake out, yet as with any large market, inevitably attracts fierce competition. Still, the plummeting price of these magical new tools appears likely to catalyze a golden age of user-created content. And in a world changing as rapidly as today, many users are keen to engage with familiar IP: to relive nostalgic characters from their childhood in novel, more dynamic formats.
To unlock the full potential of an AI consumer layer, we need a solution to the sticky problem of intellectual property.
Modular IP: Facilitating GenAI’s Gutenberg Moment
As with other media formats, GenAI will inevitably transition from text to images to video. The concept of generating videos from text emerged in late 2022 with open-source models like CogVideo but broke into mainstream consciousness with OpenAI’s mind-blowing Sora demo in early 2024.
Humans are visual, narrative creatures, and video is the highest bandwidth communication mechanism we have concocted to date. A picture is worth a thousand words, a video a million pictures.
After the “Sora” moment, Chinese giants – pioneers in consumer internet – were fast to recognize the potential, releasing Kuaishou’s Kling, ByteDance’s Dreamina, and Shengshu’s Vidu, not to mention Stability AI’s Stable Video and PixVerse v2 in the West.

Today, the cost of running video models is “significantly more expensive” than text and image equivalents, but, just like chat inference, the cost curves will descend rapidly.
We are on the precipice of a Cambrian explosion in creativity: a new renaissance where a Hollywood production studio is compressed down to cost levels attainable for solo creators.
I have friends in the entertainment industry, from Hollywood to Suginami, who are sitting on multi-generational IP catalogues and see this disruption coming. In the genAI era, no studio can keep pace with the creativity of the swarm. The only option for survival is to license the IP broadly and let the internet do its thing – supplying near-infinite derivations and curating the top 1%. TikTok, unbounded by reality, with millions of producers, infused with treasured IP.
And it’s not just the studios. According to McKinsey’s 2024 AI Report, more than 65% of enterprises are now using genAI regularly, with sales and marketing the primary use cases. Do you really think James over there in marketing, or even the white-shoe Mad Men agency, can outcompete the internet in crafting a new viral ad campaign after the tools have been democratized?
However, compensating creators and IP holders for their role in crafting both the original and the derivative works is essential for this unlock. The alternative is one off licensing agreements with Big Tech, training data and derivative works locked into exclusive platforms. Big lame.
This is where Story Protocol comes in.
Story
Not a shocker that the team hails from Korea. When it comes to consumer internet and entertainment, the center of gravity has shifted towards East Asia in both content (KPOP, Anime) and formats (short video, live-streaming, gaming, digital gifting).
The essential problem Story aims to solve is making IP licensing scalable. Using the traditional legal system, one-off IP infringement cases and licensing deals are expensive and time-consuming, intolerable in a future where anyone with an internet connection may soon compete with Hollywood studios.

Story’s solution is to templatize contracts and bring them on-chain for more efficient, scalable exchange:
- Their “proof of creativity protocol” allows anyone to onramp IP
- Creators register IP assets – basically an NFT (either an on-chain asset or one that represents some off-chain IP)
- Then the IP assets can be wrapped in modules like licensing, royalty, and dispute resolution – dictating under what terms this IP asset can be used
Final dispute resolution still falls back to the existing legal system, but the infrastructure may help scale IP licensing dramatically, essential when the licensees may soon number in the millions. By bringing IP negotiations on-chain, Story has the potential to unlock a golden era of user driven video, advertising, and more…
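Schematically, the flow might look like the sketch below (illustrative data structures, not Story’s contracts): an IP asset is registered, license and royalty terms are attached as a module, and derivative works reference their parent under those terms.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class LicenseTerms:
    """A licensing module attached to an IP asset (terms are illustrative)."""
    commercial_use: bool
    derivatives_allowed: bool
    royalty_pct: float          # share of derivative revenue owed to the parent IP

@dataclass
class IPAsset:
    asset_id: str               # in practice, an NFT representing on- or off-chain IP
    owner: str
    license: Optional[LicenseTerms] = None
    parent_id: Optional[str] = None

class IPRegistry:
    def __init__(self) -> None:
        self.assets: Dict[str, IPAsset] = {}

    def register(self, asset_id: str, owner: str) -> IPAsset:
        self.assets[asset_id] = IPAsset(asset_id, owner)
        return self.assets[asset_id]

    def attach_license(self, asset_id: str, terms: LicenseTerms) -> None:
        self.assets[asset_id].license = terms

    def register_derivative(self, asset_id: str, owner: str, parent_id: str) -> IPAsset:
        parent = self.assets[parent_id]
        if parent.license is None or not parent.license.derivatives_allowed:
            raise PermissionError("parent IP does not permit derivatives")
        child = IPAsset(asset_id, owner, license=parent.license, parent_id=parent_id)
        self.assets[asset_id] = child
        return child

    def royalty_due(self, asset_id: str, revenue: float) -> float:
        asset = self.assets[asset_id]
        return revenue * asset.license.royalty_pct if asset.parent_id else 0.0

registry = IPRegistry()
registry.register("ip-001", owner="studio")
registry.attach_license("ip-001", LicenseTerms(commercial_use=True, derivatives_allowed=True, royalty_pct=0.05))
registry.register_derivative("ip-001-remix", owner="creator", parent_id="ip-001")
print(registry.royalty_due("ip-001-remix", revenue=1_000.0))  # 50.0 owed to the parent IP
```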
Gaming & Virtual Reality
Gaming has always been at the forefront of new entertainment mediums, a pioneer in the shift from text to images to video and now, virtual reality. AI is likely to prove an essential enabler here, with ultra-performant inference chips like Groq’s accelerating the cost curves of video models like Sora to fuel experiences on AR / VR headsets which are finally (fingers crossed) coming into their own. The combination is a future of zero-marginal-cost content:
“Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost. This is how the economics of the metaverse will ultimately make sense: virtual worlds needs virtual content created at virtually zero cost, fully customizable to the individual.”
–Stratechery: DALL-E, the Metaverse, and Zero Marginal Content
I had a semi-deprived upbringing when it came to gaming, largely limited to mainstream titles like FIFA, Madden, and Halo, so I tapped the experts (@freaz7 from our gaming team) for the download. While still early, the creativity on display is wild. Gaming is truly becoming one of the great art forms of the 21st century:
Elden Ring, experienced in its entirety, is the most beautiful art I have ever seen
— Elon Musk (@elonmusk) May 23, 2022
And we are barely scratching the surface of what an AI-laced future will entail:
- Games, like AI Arena, allow users to train an AI to mirror one’s playing style – how to position, when to jump, how to combo – and then unleash the creation to fight other trainees
- Others, like Virtuals Protocol, are infusing virtual worlds with autonomous agents using their GAME engine, starting with multi-agent interactive simulations in Roblox making the game truly unpredictable and open-ended
Presenting Project Westworld: The First Playable Autonomous World on Roblox
Our proprietary GAME engine/framework powers a multi-agent interactive simulation in Roblox called Project Westworld, where we are able to observe autonomous behaviour leading to emergent storylines.… pic.twitter.com/cx9RlEHt5X
— Virtuals Protocol (@virtuals_io) September 30, 2024
- Still others, like Parallel Colony, are building simulations where players can collaborate with their agents while progressing through the planet. Like the Unreal and Unity game engines pushing into film / design, Parallel provides yet another example of how technology underpinning virtual worlds can often be adapted to other use cases. Wayfinder (discussed above) is a spin out from the same team.
The continued blurring between the digital and the physical is a 21st century theme which will accelerate as AI makes virtual worlds more immersive and personalized. It is difficult to see how “real world” experiences will keep up. If we have learned anything over the last decade, value follows time and attention.
However, most of the examples above don’t require crypto. In fact, many gamers still view crypto as an unwelcome distraction from the gameplay. Still, this strikes me as a limited view on what “gaming” is now and is likely to become in the future. Many games already resemble complex economies in their own right, filled with users, currencies, commodities, commerce, aesthetic assets, and more, the scale and complexity of which will only grow with advances in internet access, bandwidth, and hardware. (See the Delphi report for the deep dive.)
Worlds within worlds.
Before agents swallow the “real world,” they may first infiltrate these digital economies. Frictionless agent to agent exchange and interoperability to avoid lock-in are just as beneficial here as in meatspace. Like third-party developers or online merchants, virtual world participation often demands substantial investment – in both time and money – that users and agents would be keen to protect.
AI will accelerate the virtualization and gamification of the “real world” while also infusing digital worlds with ever more complex “real” economic activity. The blurred line may soon disappear altogether. There are eight billion humans, five billion internet users, and three billion gamers. The shift in time and expenditures – from bars to onlyfans, from physical coats to digital skins, from jewelry to NFTs – has been clear as the internet eats global GDP.
The DeAI thesis in gaming is therefore similar to the DeAI thesis broadly. The digital sphere is sucking ever more time, money, and energy from the world of atoms. It is increasingly where reputations are made, relationships formed, and goods exchanged.
And now these digital swarms, these collective hiveminds have reached a density where they increasingly exert influence over even the most powerful meatspace institutions, those weary giants of flesh and steel.
Increasingly, the power projection from one realm to the other has reversed.
Prediction Markets: Revolt of the Public
Basking in the rays of a hotly contested 2024 election cycle, prediction markets are (finally) having their moment in the sun. Polymarket, the primary venue, now has US$1.8b in volume outstanding for the upcoming US presidential election and is regularly quoted on Boomer media channels.

The revolt of the public is underway: Markets > Pollsters. Plebeians > Experts. Hayek > Marx. The wisdom of the crowds resurgent.
For a prediction markets overview, please see my colleague Ben Sturisky’s Alpha Feed post or Michael Rinko’s recent deep dive (thanks for the slides, bro).
What I am primarily interested in here is the complementary nature of AI and crypto in this domain. As with the familiar anecdote regarding man-machine hybrid teams outperforming grand-master-destroying programs in chess, leveraging the wisdom of the market and synthetic intelligence simultaneously can lead to better forecasting, more efficient existing markets, and even novel markets and synthetic intelligences altogether.
The case for crypto in prediction markets has always been clear. Global, permissionless markets with cross-border rails and instant settlement.

Until recently though, the Achilles’ heel of prediction markets has been liquidity. Even today, most markets are too thin for the signal to be meaningful, or to be worth the time, effort, and opportunity cost of capital for most participants with the relevant knowledge.
Vitalik covers the complement well: “AIs are willing to work for less than $1 per hour, and have the knowledge of an encyclopedia.” If that is not enough, they can now integrate with real time web search, call external tools, and perform chain-of-thought level reasoning. Like tradfi arb bots, many more markets can be seeded as agents “take the first stab,” providing signal on micro-level data points which were previously subject to heuristics and intuition.
For instance, my friend @zxt_tzx used to work in the Singapore government, building a prediction markets platform which aimed to provide better forecasting data for government policy. Categories included:
- Politics: say the timing of Singapore’s next snap election
- Economics: predictions around unemployment and inflation
- Healthcare: predictions around the viral spread of COVID
- Transportation: The number of people using the MRT services in 2025
- etc
While liquidity for many of these use cases is clearly low, the signal provided could be material in informing policy and capital allocation.
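A minimal sketch of an agent “taking the first stab” at such a market: a (stubbed) LLM produces a probability estimate, and the agent only bets when its edge over the current market price clears a threshold, sized with a fractional Kelly rule. The estimator, thresholds, and numbers are all hypothetical.

```python
def estimate_probability(question: str) -> float:
    """Stand-in for an LLM call (with web search / chain-of-thought) that returns
    a calibrated probability for a yes/no market question."""
    return 0.62  # hypothetical model output

def kelly_fraction(p: float, market_price: float) -> float:
    """Kelly-optimal fraction of bankroll for a binary market where a YES share
    costs `market_price` and pays 1 if the event occurs: f* = (p - q) / (1 - q)."""
    edge = p - market_price
    if edge <= 0:
        return 0.0
    return edge / (1 - market_price)

def seed_market(question: str, market_price: float, bankroll: float,
                min_edge: float = 0.05, kelly_multiplier: float = 0.25) -> float:
    """Return how much the agent should stake on YES (0 if the edge is too small)."""
    p = estimate_probability(question)
    if p - market_price < min_edge:
        return 0.0
    return bankroll * kelly_multiplier * kelly_fraction(p, market_price)

stake = seed_market("Will Singapore hold a snap election before 2026?",
                    market_price=0.50, bankroll=1_000.0)
print(f"seed stake on YES: ${stake:.2f}")  # $60.00 with the numbers above
```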
The blend creates a powerful synthesis where Hayekian distributed consensus is combined with the accumulated history of knowledge penned on the internet (and increasingly broader, multi-modal data sets). A cyborg symbiosis of man and machine, market and planner, to make superior forecasts and novel data sets in previously non-existent markets.
But why stop at seeding markets? In addition to seed capital, LLMs could act as the creators of markets themselves (to reduce the possibility of loopholes for cleaner market outcomes) and as arbiters (governance), and potentially build new predictive models off these data sets. This appears to be the direction GenLayer is leaning with its “AI-powered smart contracts”: infusing smart contracts with a consensus produced by multiple LLMs to bring non-deterministic reasoning on-chain, useful in prediction market arbitration, governance, and more.
As Vitalik forecasts, proving this joint model out on a micro-scale can lead to interesting applications in:
- Financial markets: what will be the price of this token tomorrow? (see Allora / Robonet / BitTensor Subnets below)
- Fraud reduction: Is 0xSafu a safe contract? (see Rug AI and Meta Trust below)
- Governance: Is this social media post acceptable under terms of use? (See governance section)
In almost all use cases, we could depend on a single governance body or algorithm. However, a plural approach that combines inputs from the platform, multi-agent systems, and external human participants is likely to provide a richer signal.
And while this combination of synthetic intelligence and markets is likely to transform forecasting, the really interesting possibilities occur as prediction markets transition to decision markets.
The tools of 21st century governance are being born before our eyes. The marriage of direct democracy, markets, and artificial intelligence, harnessed to enable more efficient governance mechanisms.
One of the promises of crypto had always been novel governance institutions. Those promises have largely disappointed.
That may be changing.
Governance: Futarchy, Pluralism, and Post-Cog Coordination
The previous bright lines of the 20th century: autocracy + command and control economies vs. democracy + markets…

…have blurred somewhat. Many autocratic governments solicit citizen feedback and harness the market for economic growth. Market-driven corporates employ autocratic management hierarchies. Most democratic governments have large bureaucracies with strong market influences (lobbying / superPACs).
Today, the clean ideological divide of the 20th century has arguably shifted towards something more like:

Source: Plurality philosophy in an incredibly oversized nutshell
Yet, despite these political ideological shifts and the acceleration in technological tools, governance mechanisms have remained stagnant. While there is often some decentralized input (like voting in an election or board room) and many decisions are “data driven” (in the sense the managers or apparatchiks try to assess the relevant variables), a lot of critical decisions come down to a few flawed individuals, with imperfect information, “following their gut”.
This process seems unlikely to survive the combined onslaught of decision markets and artificial intelligence.
If the scaling laws hold, we may reach super human level intelligence within the decade. One million GPU superclusters crunching trillions of variables per second. Individual humans will be unable to compete. Our best chance to remain relevant is harnessing the collective wisdom of humanity.
@Mrinko’s latest deep dive starts off by comparing the latest SOTA AI models and markets. Both process trillions of variables in pursuit of the best information to inform decisions. The difference is that one system is highly centralized (AI) and relies on machine computation, while the other is highly decentralized, relying on collective human wetware.
In the famous quip from Peter Thiel: “Crypto is Libertarian. AI is Communist”.
However, to me, the question is less about either/or, but more about how we combine them to arrive at superior outcomes to either individually.
One example Vitalik highlighted was the potential for more democratic governance over systemically important AI systems: essentially on-chain DAOs which would govern the value-chain of machine intelligence.
Perhaps this could be applicable for decentralized training runs coordinated on-chain, but it does little to fix DAOs’ poor governance track record, and it would remain inapplicable to centralized efforts.
The more interesting angle is the concept of “Futarchy” of which MetaDAO on Solana is the leading example. Essentially, all major strategic calls are made via a tradeable “decision market” based on market participants betting whether the initiative in question would be accretive to the token price.
The decision making paradigm is fundamentally flipped: instead of waiting for Sundar to make a decision and then announce it on an earnings call two months later to see how the market responds, Google could tap the collective wisdom of global market participants (and synthetic participants) to make the best decision which solves for the target variable (STONK price go up).
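A toy version of the futarchy mechanic (inspired by MetaDAO-style conditional markets, not its implementation): traders price the token in two conditional markets, one that only settles if the proposal passes and one only if it fails, and the proposal executes when the pass-conditional price sufficiently exceeds the fail-conditional price.

```python
from statistics import mean
from typing import List

def decide_proposal(pass_market_prices: List[float],
                    fail_market_prices: List[float],
                    threshold: float = 0.03) -> bool:
    """Futarchy rule: execute the proposal if the average token price conditional on
    PASS exceeds the FAIL-conditional price by `threshold`. Trades in the branch that
    does not materialize are reverted, so traders only bear risk in the real world."""
    pass_twap = mean(pass_market_prices)   # crude stand-in for a TWAP
    fail_twap = mean(fail_market_prices)
    return pass_twap > fail_twap * (1 + threshold)

# Hypothetical prices observed over the trading window
pass_prices = [1.02, 1.05, 1.08, 1.07]   # token price if the proposal passes
fail_prices = [1.00, 0.99, 1.01, 1.00]   # token price if it fails

print("execute proposal:", decide_proposal(pass_prices, fail_prices))  # True
```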
Today, the question is whether this process can outcompete Sundar and the Alphabet C-Suite, but tomorrow the question will be whether this process can outcompete Google’s one million TPU cluster, potentially a much higher bar.
As the marginal cost of intelligence drops to zero and labor is subsumed by compute, these market based signals may end up as the best way to incorporate the input of broader swaths of humanity in the decision making process vs. leaving key decisions up to a single super-intelligence and the handful of elites with which they liaise.
In crypto, the clear place to start is protocol governance. DAOs have generally been a disaster: at best honest, but flawed attempts at collective decision-making; at worst, a multi-sig being drained by three dudes sitting on a beach in Uluwatu.
Decision markets like MetaDAO, Drift, and Azuro are looking to transform on-chain governance – shifting from token votes to a combination of market and AI driven decisions, incorporating not just a single model or human but multi-agent systems of bots and humans with differing, often competing perspectives. A plural synthesis.
While little more than on-chain experiments right now, successful implementations are likely to jump the chasm from crypto into traditional institutions.
The barriers between them might (finally) be breaking down.
De-ENSHITTIFICATION
“Enshittification” was coined by Cory Doctorow in November 2022. From Wired:
“Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a “two-sided market”, where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them”.
As discussed earlier, Jordan Yeak went clinical on enshittification by subsector – outlining the process in ride-sharing, food delivery, ticketing, clearing, loyalty programs, research markets, and gift cards. I would append some of the world’s largest markets to the list: online advertising, eCommerce aggregators, app store development and banking.
- How many ads are you seeing on the Google home page now?
- Is 30% really a “market rate” for app developers to pay?
- How quickly can Amazon spin up competing products to its “best third party sellers”?
- How much time and paperwork is involved in taking out an SMB loan?
This is structural. This is aggregation theory. Many billions have been burned to squash competitive networks and ensure transactions stay “on platform” no matter how much the experience degrades.
Web Proofs – the enabler discussed earlier – may provide an exit. A potential end to the “great enshittification” via a mechanism to port verifiable data from walled gardens into an open data layer, reinjecting competition in service of the end user.
AI is little more than data and compute. With the right data, competitive and novel networks can be spun up quickly. We are seeing early offshoots emerging at the app layer, though traction remains limited. For example, one of the pioneers in ride-sharing, Teleport, has a live app, “Trip”, in Austin, but currently only has 857 drivers and ~4,000 riders. Nosh is similarly minuscule today relative to the food delivery giants.
One of the biggest items I will be watching is the ability for web proofs to gain traction on the demand side. The benefit for drivers or small merchants is obvious: the ability to port reputation or inventory to other platforms to avoid lock-in. The trick is onboarding the demand side to build liquidity.
Again, while still very early, agents may disrupt the familiar search -> app status quo. As opposed to a human with limited capacity to scan multiple platforms, an agent could serve as its own aggregator, evaluating inventory from multiple venues to service an intent based on cost-performance tradeoffs expressed by the end user.
“Hey Siri – I want a medium pepperoni pizza from Domino’s, Pizza Hut, or Papa John’s. Please look for the fastest, cheapest option which can get here by 5pm”
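A minimal sketch of how an agent might service that intent, assuming it has already gathered quotes (via APIs or web proofs) from each venue. The vendors, prices, ETAs, and scoring weights are all hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Offer:
    vendor: str
    price_usd: float
    eta: datetime  # promised delivery time

def choose_offer(offers, deadline, weight_price=1.0, weight_minutes=0.05):
    """Pick the cheapest/fastest offer that can arrive by the deadline.
    The scoring weights are illustrative knobs a user (or agent) could tune."""
    feasible = [o for o in offers if o.eta <= deadline]
    if not feasible:
        return None
    now = datetime.now()
    def score(o):
        minutes_out = (o.eta - now).total_seconds() / 60
        return weight_price * o.price_usd + weight_minutes * minutes_out
    return min(feasible, key=score)

# Hypothetical quotes gathered by the agent from three venues.
now = datetime.now()
offers = [
    Offer("Domino's",    14.99, now + timedelta(minutes=35)),
    Offer("Pizza Hut",   13.49, now + timedelta(minutes=55)),
    Offer("Papa John's", 15.99, now + timedelta(minutes=25)),
]
best = choose_offer(offers, deadline=now + timedelta(hours=1))
print(best)
```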
Web Proofs may prove a bust if they cannot attract user demand. However, if traction picks up, almost any marketplace – particularly higher-value, lower-touch services – would have a much greater chance of disruption.
One of the more exciting unlocks could be in finance.
AiFi: Money at the Speed of Light
Vitalik recently flagged his concerns regarding the “ouroboros” nature of many DeFi applications.
This ruffled some feathers, but in my opinion is fundamentally true. Finance is not an end in itself, but a tool to facilitate productive economic activity. If it’s just PVP trading and yield farming, what’s the point?
Lending
Perhaps the largest indictment of DeFi today is the lack of undercollateralized loan volume. While projects like Maple, Goldfinch and Credix have made strides, the approach feels “skeuomorphic,” tokenizing traditional lending businesses for slightly better liquidity.
The identity and reputational data points enabled by web proofs may unlock richer data profiles for automated on-chain underwriting and loan disbursement, propelling more non-trading-related activity on-chain.
The more data, the better the underwriting; the more loans, the greater the “money multiplier”; the more transactions, the richer on-chain data repositories become.
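As a toy illustration of that flywheel, here is a minimal underwriting sketch fed by hypothetical web-proof-verified data points. The field names, weights, and pricing formula are all made up; a production model would be far richer:

```python
# Hypothetical borrower profile assembled from web-proof-verified data
# (e.g. income history, merchant sales, on-chain repayment record).
borrower = {
    "verified_monthly_income_usd": 4_200,
    "months_of_income_history": 18,
    "on_time_repayment_rate": 0.97,   # share of prior loans repaid on time
    "chargeback_rate": 0.01,
}

def underwrite(profile, base_rate=0.12):
    """Toy scoring model: more verified history and better repayment
    behavior -> larger limit and lower rate. Weights are illustrative."""
    score = (
        0.4 * min(profile["months_of_income_history"] / 24, 1.0)
        + 0.5 * profile["on_time_repayment_rate"]
        + 0.1 * (1 - min(profile["chargeback_rate"] * 10, 1.0))
    )
    credit_limit = score * 3 * profile["verified_monthly_income_usd"]
    interest_rate = base_rate * (1.5 - score)  # better score -> cheaper credit
    return round(credit_limit, 2), round(interest_rate, 4)

limit, rate = underwrite(borrower)
print(f"limit: ${limit:,.0f} at {rate:.1%} APR")
```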
The end goal would be something like what Ant Financial pulled off in China – using wallets to effectively hijack payments from the existing financial system into its network, building the world’s largest financial database for surgical underwriting and loan distribution.

Source: Ant Financial S-1, pre-crackdown
The difference is this network would be open and composable, siphoning capital from tradfi into the new rails, soon to be infused with completely automated capital allocation.
Automated Liquidity Management / Trading
Bots have existed in traditional finance since the dawn of electronic trading. However, as the cost of software development drops to zero, we will see their utilization increase massively – not just by global hedge funds with co-located NASDAQ servers, but also by retail investors.
The most interesting projects are those taking advantage of emerging primitives lower in the stack. Robonet is one such project, productizing models from Allora, discussed in DeAI III. Allora is a supply-side model aggregator (similar to Pond) which improves over time by evaluating competing models’ performance on particular “topics” in a “context-aware” way.
By leveraging these models across different topics, Robonet crafts DeFi agents – which users fund using vaults – across a range of use cases: prediction markets, memecoin trading, liquidity management, liquid restaking, DAO governance votes, lending agents, arb agents, the works.
Other projects like Compass Labs or RunLoop aim to provide traders with data sourcing, indexing, strategy development, backtesting, sophisticated simulation, monitoring and execution tools. Compass Labs even translates Python functions into smart contract calls, opening the door for non-Solidity developers to easily craft intelligences with in-built reward functions to continually improve in their respective trading venues.
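For intuition, here is a generic sketch of the kind of vault-funded agent loop described above. None of the function names belong to Robonet, Allora, or Compass Labs; the stubs simply show the shape of the forecast-then-execute logic an agent might run:

```python
import random

# Generic shape of a vault-funded DeFi agent loop. All names are hypothetical;
# a real agent would query a model network (e.g. for a price forecast) and an
# execution venue rather than the stubs below.

def fetch_forecast(topic: str) -> float:
    """Stub for a model-network query, e.g. 'ETH/USD 1h return'."""
    return random.uniform(-0.02, 0.02)  # predicted return

def execute_trade(direction: str, size: float) -> None:
    print(f"submitting {direction} order for {size:.2f} units")

def run_agent(vault_balance: float, topic: str, threshold: float = 0.005):
    """One decision step: trade only when the forecast clears a threshold,
    sized as a fixed fraction of the vault."""
    forecast = fetch_forecast(topic)
    if abs(forecast) < threshold:
        print("no edge, holding")
        return
    direction = "buy" if forecast > 0 else "sell"
    execute_trade(direction, size=vault_balance * 0.05)

run_agent(vault_balance=10_000, topic="ETH/USD 1h return")
```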
The programs are increasingly writing themselves, and they will have direct access to permissionless trading venues. But this acceleration will require some guardrails.
Monitoring / Security
“Permissionless” development brings both innovation and opportunism. DeFi Llama lists the total volume hacked in crypto at US$8.7b.
As LLMs fuse with finance, the cat and mouse game of cyber offense and defense will spin at the speed of electrons themselves. Along with leading analytics plays like Nansen, Chainalysis and Gauntlet, a host of startups are emerging to provide comprehensive risk assessment tools across on-chain environments to project probabilities for rugs, fraud, and even beneficial ownership clusters.
Rug AI, MetaTrust and Test Machine are examples, providing risk scores across a range of factors:

More interesting is when these capabilities are integrated into risk management at the protocol level – likely infused with governance. For example, deciding whether to approve certain collateral types, and at what collateralization ratios, in a dynamic, automated fashion based on the aggregated score.
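A minimal sketch of that idea: an aggregated risk score (0 to 100) mapped to dynamic collateral parameters. The score bands and LTV caps are illustrative, not drawn from any live protocol:

```python
# Hypothetical aggregated risk scores (0 = safest, 100 = riskiest) for
# candidate collateral assets, e.g. combining contract, liquidity, and
# ownership-concentration checks from tools like those above.
risk_scores = {"ETH": 8, "NEWCOIN": 62, "RUGTOKEN": 91}

def collateral_params(score: int):
    """Map an aggregated risk score to (approved, max LTV). Bands are illustrative."""
    if score < 20:
        return True, 0.80
    if score < 50:
        return True, 0.50
    if score < 75:
        return True, 0.25
    return False, 0.0

for asset, score in risk_scores.items():
    approved, ltv = collateral_params(score)
    status = f"approved at {ltv:.0%} max LTV" if approved else "rejected"
    print(f"{asset}: {status}")
```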
Lastly, I have yet to come across a super compelling project which can automate the audit process (DM me if so!), but clearly that would do wonders for both security and the pace of deployment, not to mention provide a fantastic jumping-off point into insurance. One of the largest bottlenecks to releasing new primitives has been the long, expensive process of auditing code pre-launch. This seems like a domain where the AI models themselves – trained on the entire corpus of on-chain contracts and historical attack vectors – could assist. Simulated environments in which pre-launch code is exposed to hostile models trained on a range of exploited and proven code bases could “harden” smart contracts, mimicking years of “live deployment” without the associated risks.
It’s fascinating and scary at the same time. As we have seen with HFT, hyper-efficient micro capital allocation can lead to systemic macro risks as liquidity evaporates when it is most needed and correlations shoot to one. With AiFi, this will get supercharged by frictionless, mercenary capital hopping between chains and protocols in search of the optimal risk-reward at the speed of light. It’s hard to say whether volatility will dampen as users outsource their trading to more “rational agents”, or whether the leveraging and deleveraging cycles only become larger and faster: global capital formation and evaporation in the age of agents, social media, and memes. The latter seems more likely.
And yet, other questions are more interesting still. As agent capabilities continue to increase and higher-order goals become the norm (the extreme version: “take this US$5,000 of USDC and make me money on the internet”), will the process of capital allocation and the types of projects which get funded fundamentally change?
Will companies and protocols become more fully agentic in nature? Full-on, system-level orchestrators of which human involvement is a mere subcomponent?
DeAI Agents: A Real Contender vs Big Tech?
Very few projects have the audacity to say they are taking Big Tech head-on in the race for AGI / ASI. Most take a piecemeal approach: attacking one primitive in a composable chain which, together, forms a network of hundreds of interlocking protocols to take on the giants.
However, the largest project in the DeAI agents space aims to do just that. The “Artificial Superintelligence Alliance” ($FET, formerly Fetch.ai, which has merged with Ocean Protocol and SingularityNET), currently valued at ~US$4b FDV, appears to be taking a hybrid approach – not as centralized as Big Tech, but more vertically integrated than the fragmented DeAI mosaic painted in this series.
To be candid, I came into this section with a surface-level negative bias for a few reasons:
- Token merger: merging networks and communities is always more complicated than company M&A which itself is fraught with over-promising and under-delivering
- Traction: Despite all three projects being around since 2017, the traction relative to what was promised fell short in my view. I’m not sure why vertically integrating would solve this.
- Nomenclature: Rebranding to “Artificial SuperIntelligence Alliance” and various subcomponents titled “the AI engine” and “AgentVerse” and “SingularityNET” just always felt kind of ugh and made my saliva taste weird.
That being said, the logic of the merger appears sound, bringing together complementary capabilities from across the stack:
- Fetch.ai brings a framework for delivering commercial solutions based on decentralized agent systems
- Ocean brings a decentralized network and marketplace for data management
- SingularityNET: “brings a concrete technical path to AGI and ASI, going beyond what Big Tech R&D teams are doing, integrated with a decentralized tech stack including an agents framework, a ledgerless blockchain…and a decentralized compute fabric”
By merging the commercialization frameworks and layer 1 of Fetch with the model capabilities of SingularityNET and the data management layer of Ocean, the hope is the combined entity will have more talent, scale, and capital to go after their ambitious roadmap.

In my opinion, the roadmap feels backwards based on today’s leading architectures: the scaling laws dictating that AGI, let alone ASI, will require OOMs greater compute. I might have gone with a slightly different ordering:
- Show Apps, Unify Stack
- Scale Compute
- Build ASI
But I digress…
The investment thesis here is similar to investing in an AI lab, but one prioritizing decentralized approaches to AGI/ASI. The amount one should pay depends on the team, a belief in their specific approaches, their ability to commercialize those approaches, and demonstrated user, developer, and revenue traction. Much of this hinges on whether you, as an investor, believe in their specific approaches to building artificial general intelligence relative to competitors.
The roadmap from their vision paper targets four separate approaches:
- STREAM #1: “LLMs”. Based on the capital requirements alone, this approach is an uphill battle. Fetch spent $100m on GPUs, a far cry from the $200b in hyperscalers’ annual capex spend. Admittedly, the emphasis is on minimizing hallucinations, decentralized training and inference breakthroughs, and data provenance – so not exactly going head-to-head on scaling. However, current architectures seem to favor highly capitalized, centralized approaches.
- STREAM #2: “Neural-Symbolic Evolutionary Approach” which, in layman’s terms, appears to be a bet on decentralized networks providing a richer architecture for combining diverse AI tools into subnetworks to carry out useful functions
- The paper mentions SNET’s latest OpenCog framework “Hyperon”, which acts as a “metagraph” where multiple algorithms work together to solve problems and achieve system-level goals. This sounds similar to our modular compute thesis from part III and the graph-based self-learning systems discussed above. However, there is a lot of lingo, and, as a non-expert, I’m not going to pretend I understand its merits compared to other approaches. It makes me nervous that this appears to be the most differentiated approach on the roadmap, and that the primary architect had previously exaggerated AI capabilities in an earlier project.
- STREAM #3: “World Models”, which build on the Ocean Predictoor model – similar in nature to the approaches by Allora or Numerai, which use models to predict ongoing time series like price feeds. However, the ambition is to push into much broader data sets over time. The end goal is to combine feeds to “continually ingest the physics of the world to predict the next state. It will be a world model on ground-truth physics.” This seems pretty far out and would likely require massive amounts of compute to come to fruition.
- STREAM #4: “Emergent AI from Agent Networks”, which is fairly self-explanatory – routing tasks through networks of agents as the most compute-efficient path to AGI.
To me, all four approaches appear likely either to underperform Big Tech (#1 and possibly #3) or to be merely competitive with the broader modular DeAI thesis (#2, #4 and possibly #3). The quasi-decentralized, quasi-vertically-integrated approach may end up caught in the middle of the two extremes:

Until the scaling laws break down, my bias is towards a barbell investment approach focused on the tails. Big Tech as the higher-probability, lower-vol, yet more “priced in” bet on AGI swallowing labor. DeAI’s mosaic as the highest-vol outperformer in a world of compute-efficient networked intelligences.
I’m not a trader, and frankly larger, more liquid “umbrella” projects like TAO and FET could likely outperform over the next 12 months based on AI narrative momentum alone. However, to me, the combination of Meta + the Mosaic still feels like DeAI’s best shot at competing with Big Tech in its bid to host the agentic economy.
We are in inning 1: the first out of the starting blocks rarely makes it the distance. Before Google, there was Yahoo. Before Facebook, there was MySpace. Before Amazon, eBay.
I suspect most of the eventual winners have yet to launch a token or have even been conceived. Even on the more integrated side, several competitors – like GaiaNet and Spectral Labs – are in swift pursuit, providing decentralized infrastructure that enables anyone to create, deploy, scale and monetize their own AI agents.
GaiaNet Example Node:

Source: Gaia Litepaper
Others, like OLAS and Theoriq (discussed in DeAI III), are also building the foundations for composable agentic marketplaces to scale full agentic services on-chain.
With DeAI, my bias is to lean into the modular extreme: a Darwinian battle of hyper-focused projects iterating on product, GTM, tokenomics and governance in their respective niche which, when combined, may just outcompete the integrated solutions. A network of networks which is permissionless, messy, and organic.
There are a lot of DeAI tokens coming to market over the next 12 months, and frankly, there is unlikely to be enough liquidity for the market to absorb them all. However, hopefully this series has provided a framework for evaluating projects as they emerge, allowing investors to pick themes and parts of the stack that have merit and can outperform. From routers to data warehouses, coprocessors to agent interoperability, there is no shortage of attractive returns to be had for the discerning.
The agentic economy may prove to be the most lucrative investment theme of our generation. With the value of your labor going to zero, the train only leaves the station once.
While many specific projects remain overvalued relative to traction, I still believe the chances of crypto having a role to play are mispriced.
The Agentic Economy
In the late 19th century, US legal doctrine began granting corporations many of the same rights as natural persons. Today, that list includes property rights, the ability to enter into contracts, due process rights, equal protection under the law, freedom of speech, the right to sue and be sued in federal court and more.
Over the last two hundred years, these corporate structures have provided significant benefits in economic development: limited liability to investors and entrepreneurs, the pooling of capital and risk, specialized management, and liquid ownership markets. The combination enabled many of the large investments and economies of scale which underpinned production during the industrial revolution and today’s global economic juggernaut.
The industrial revolution provoked changes in societal organization – from farms to factories – which demanded institutional reform – from feudalism to markets – which, in turn, required new legal and governance arrangements to cope with, and capitalize on, these fundamental shifts.
Looking at our current trajectory, AI’s impact on the economy will be more meaningful: transforming society not in two centuries, but in as little as two decades.
Cyber Fund defines the AI economy as the “ability for agents to enter contractual relationships with people, businesses and each other, manage capital and leverage on-demand tooling, such as memory, humans-in-loop marketplaces, APIs or external capabilities in a form of software tools or data.”
From chain-of-thought reasoning to tool-use to self-learning multi-agent systems to agentic payments, many of these capabilities are emerging.
Humans will soon be joined by synthetic intelligences which rival and surpass our own: the culmination of our transition from an industrial society filled with humans and corporations to an agentic society populated with humans and corporations, but largely run by agents.
Our existing infrastructure and governance institutions are not equipped to navigate this upending of the status quo. By shoe-horning agents into existing legal structures and closed ecosystems, we will not only handicap our potential growth but likely accelerate the trends of the last decade: pushing more profits and influence towards the few large tech companies actively cornering the markets for data and compute and putting an unnecessary tax on the means of production.
The prize is no longer just software but all of services globally.

Source: Sequoia: GenAI’s Act o1
As Hayek reminds us, “Economic control… is the control of the means for all our ends. And whoever has control of the means must also determine which ends are to be served.”
To me, interlacing AI with crypto networks offers the best chance for human organization to keep pace with this acceleration while avoiding techno-feudalism. Corporations were necessary to pool talent, capital and risk to harness and fuel the massive productivity gains of the industrial revolution. Similarly, the agentic age will require new infrastructure and organizational forms to exploit its full potential while retaining individual freedoms.
Ideology aside, crypto rails provide several practical benefits over status quo infrastructure and institutions in their bid to host the agentic economy:
- Payment UX: complex modular tasks may string together hundreds of micro agents globally – are API keys for all of them really the optimal path? (See the sketch after this list.)
- Permissionless environment: Agents may rapidly outpace the number of humans on earth. Making them all open a Stripe account and pay 3% to transact strikes me as a high, unnecessary tax on innovation
- Interoperability: an open data layer (increasingly populated with web2 histories via web proofs) and composable compute hardware could do wonders for the pace of AI development relative to enclosed ecosystems
- Tokenization: faster settlement, automated and composable capital allocation, and potentially more diversified ownership of the new means of production
- Innovative Governance: looking across the globe, the biggest disparity in wealth and quality of life appears not to be geographic location or access to natural resources, but largely the quality of governing institutions. While we have picked the low hanging fruit, we should by no means consider ourselves at “The End of History.” From direct democracy to futarchy to quadratic voting based on our pluralistic identities, let 1000 flowers bloom.
- Transparency & Provenance: blockchains enable traceability of the data, training, and inference processes used to build the intelligences we will increasingly trust with our most important decisions while also allowing more effective compensation for those inputs
Whether these properties alone are enough to sway the agentic economy onto crypto rails remains to be seen. There are powerful interests keen on ensuring that does not happen. The market is certainly pricing in a low probability today.
However, these properties have been enough to attract a select few misfits. The early pioneers betting the future of AI is Open. That the modular vision of intelligence will win out in the end. That the new gospel should not be locked away by priests in the high keep. That everyday users – you and me – have a role to play in shaping the intelligence which has come to supplant us. That the messiness of the swarm – synthesizing inputs from billions of humans and agents alike – is a gamble worth taking as our species takes its last precarious steps towards the singularity.
We are standing at the near bank of the Rubicon.
Who will we let cast the die?
As part of our October DeAI Month, Delphi Digital has elected to unlock this entire four part series, so if you have enjoyed these reports, please share 🙂
Thanks to Ceteris for his feedback & the Ex Machina Crypto x AI Telegram chat for churning through many of the ideas / AI papers explored in this report.
Appreciate you guys taking the time to read.
@PonderingDurian, signing off