The Battle of the AI Agent Frameworks

Exploring how leading AI agent frameworks help LLMs take action, comparing Web2 and Web3 models and their developer ecosystems.

by Pondering Durian
19.02.2025
50 min read
Photo by Vidar Nordli-Mathisen

The author of this report may personally hold a material position in ETH. The author has not purchased or sold any token for which the author had material non-public information while researching or drafting this report. These disclosures are made consistent with Delphi’s commitment to transparency and should not be misconstrued as a recommendation to purchase or sell any token, or to use any protocol. The contents of each of these reports reflect the opinions of the respective author of the given report and are presented for informational purposes only. Nothing contained in these reports is, and should not be construed to be, investment advice. In addition to the disclosures provided for each report, our affiliated business, Delphi Ventures, may have investments in assets or protocols identified in this report. Please see here for Ventures’ investment disclosures. These disclosures are solely the responsibility of Delphi Ventures.

The Marrow of Agency

In many ways, agent frameworks can be analogized to the digital “body”: the cyber tentacles which enable “the mind” (i.e. the model) to execute in the world. The marrow of agency.

Encouragingly, the combination is modular in nature – allowing different combinations of minds and bodies to form and attack discrete problems. Perhaps the strongest tailwind behind agent frameworks has been the accelerating capabilities of the models themselves: growing more adept at using their limbs to accomplish ever more complex autonomous tasks.

Despite early rumors of pre-training’s plateau, the mind is marching on unencumbered, now scaling across numerous vectors:

The latter has now splashed across headlines from Capitol Hill to Zhongnanhai after the release of R1 by DeepSeek, the first competitive Chinese (and open source) reasoning model.

Not only will scaling continue, but it appears poised to accelerate. DeepSeek has proven distillation works: using larger models to train smaller versions of comparable quality at a fraction of the cost…

Source: DeepSeek R1 Technical Paper

…while simultaneously driving home the point that reinforcement learning without human feedback can provide impressive performance gains.

The cost of intelligence continues to plummet. While Twitter is aflame with references to the Jevons paradox and what this might mean for Nvidia and other infra / model layer players, the result is unambiguously good for applications.

The agents are coming.

They will have encyclopedic knowledge – now combined with reasoning – for an incredibly low cost. After just over two years, synthetic intelligence is on track to surpass human-level intelligence.

Interestingly, these obvious tailwinds are at odds with the vicious web3 agent framework selloff: many tokens dumping as much as 80 – 90% in the last 60 days due to a mix of waning Q4 DeAI euphoria and Q1 macro / tariff uncertainty as Trump throws his weight around.

Web3 Agent Framework Price Performance Since Dec 31st, 2024

Source: Delphi Digital Sector Dashboard (as of Feb 18th)

While each framework has its own shortcomings (detailed later), the rapid advance of reasoning capabilities is unambiguously bullish. I would be quite surprised if 2025 did not see a second bout of enthusiasm as these accelerating capabilities are injected into applications, company workflows, and global economies.

We have received fire from the heavens, infinitely replicable digital minds caged in data centers.

We are now going to unleash them.

Matchmaking: Unleashing the Mind

Agents are entities which can perceive and act on a specific environment. This requires two-way communication streams, short-term and long-term memory, and the relevant context, integrations, and tooling to execute in the environment for which they are tailored.

AI is the brain which processes tasks, plans actions, and assesses performance. However, as with humans, tools can make agents dramatically more effective: providing “read & write” access to a much greater number of environments.

Note: SWE-agent is a coding agent whose environment is the computer and whose actions include navigation, search, viewing files, and editing. Source: “Agents” by Chip Huyen

By allowing foundation models to call tools, we endow them with agentic behavior.

In this fantastic write-up, Chip Huyen examines three broad categories of tooling that let an agent act on its environment:

Knowledge Augmentation: web browsing, text & image retrieval, read APIs to relevant data sets etc.

Capability Extension: calculators, calendars, code interpreters, etc.

Write Actions: SQL executor, email API for replies, banking API for wire transfers, etc.

While these are singular examples, complex tasks often require sequenced planning capabilities with multiple function calls. As planning is effectively a “search problem” (i.e. forecasting many possible avenues, deciding which is optimal, and “backtracking” as necessary), the recent shift towards “reasoning models” enhanced via chain-of-thought (CoT) and increased inference-time compute will make LLMs more capable planners, addressing one of the most important bottlenecks for truly agentic workflows.

way back on friday, the high score on “humanity’s last exam” was o3-mini-high at 13%. now on sunday, deep research gets 26.6%.

Sam Altman (@sama) February 3, 2025

Tool selection is also essential. More tools can provide more capabilities, but also require greater mastery and context. Different tasks require different tools, and different models work more seamlessly with certain tools than others. Crafting the precise mix of model, tooling, and sequencing to solve a particular problem is non-trivial.

Agent Frameworks aim to help streamline this process for developers: providing templates, libraries, connectors and other tooling which can be strung together in a modular way to power agents or AI-driven applications.

However, environments in which agents are expected to operate vary considerably. Selecting the right mix of model, tools, and sequencing is critical to building agents which can outperform in a given environment. A mismatch between the framework and use case will often prove more of a hindrance than a boon.

Web2 vs. Web3: Competitive or Symbiotic?

One clearly emerging divide within agent frameworks is between web2 leaders like LangChain, AutoGen, and CrewAI…

…vs. web3-focused frameworks like ElizaOS, Virtuals / GAME, ARC, and others…

To date, this divide has manifested in web2 frameworks primarily targeting utility / productivity focused use cases (enterprise workflows, personal assistants, white collar services) while web3 agents have embraced the entertainment / financial uses where crypto has found greater initial product market fit (KOLs, companions, trading).

The productivity-focused profit pool is clearly larger. Today, the TAM is bloated enterprise software contracts, but tomorrow back office, entry-level front office, and even management will be at risk.

This is rapidly underway at many leading firms:

Synthetic intelligence, likely packaged by web2 frameworks, will sweep through the traditional enterprise, capturing most of this profit pool. However, the very impact of this disruption will provide tailwinds for their web3 counterparts.

Both can win.

Death of the Firm: Coase’s Unbundling

AI will rapidly accelerate “gigafication”, pushing ever larger swaths of labor into “the creator economy”: economic activity shifting from “utility-driven labor” towards a more entertainment / passion / gambling-oriented attention economy.

In 1900, 38% of Americans worked in agriculture, feeding 76 million consumers. By 2017, that number had dropped to 1%, feeding >300m – over 40% of whom are obese – and Novo Nordisk is the most valuable company in Europe.

In the 1960s, manufacturing accounted for ~26% of the U.S. workforce. By January 2025, that number had collapsed to 7.6% of total non-farm employment.

With the AI revolution, we are likely to run this back turbo in services: industries accounting for ~70% of US GDP and ~80% of overall employment.

What comes next?

Productivity will boom within most enterprises as labor is displaced by the marginal cost of compute, leading to a spike in short term profits as margins expand. However, the “great deflation” will soon follow as enterprise moats erode, ceding ground to extremely low cost, highly-optimized, scaled compute providers.

Both sets of frameworks should be well-positioned to benefit from these tailwinds: web2 frameworks directly, via subscriptions or API calls, as enterprises lean into silicon knowledge work at the expense of their prior enterprise software license + human overseer hybrids; web3 frameworks indirectly, as economic activity shifts into its natural domain: entertainment, social, trading, gambling, adult content, etc.

From agri -> manufacturing -> services -> the hyper-gambling / infinitely personalized Netflix & chill economy. Maslow’s hierarchy meets the Last Man.

In time, however, web3 developers and sovereign agents will play a larger role in utility-focused use cases. As industrial era enterprises are unbundled, Coase’s theory of the firm disintegrates. Transaction costs drop to near zero and virtually every task transitions from a subscription / salaried wage to a series of highly specialized on-demand inference cycles; cycles partitioned between a select few mega-firms (Dwarkesh) and an open source mesh of agentic contractors (Me) competing on razor thin margins for the lowest cost compute and any proprietary data advantage (Ben Thompson).

This will take time. To date, web3 agents have largely been relegated to reply slop bots and meme-coin inspired gambling, uncompetitive for enterprise use cases. Much like decentralized storage and compute have been slow to ramp – often due to enterprise reticence, shackled by legal and regulatory risks – web2 platforms like LangChain or AutoGen will remain the frameworks of choice for most developers for the foreseeable future.

However, the regulatory shift under the Trump admin points to early signs of life. As crypto rails and composable compute grow as a fraction of the economy, one would expect frameworks which had optimized for those use cases to gain share. Web3 equips agents with verifiability, sovereignty, and web native payments / capital markets, providing a great sandbox for “free-agents” or multi-agent systems within broader economic value-chains. The enterprise will be swallowed by Big Tech from on-high, and free agent inference cycles from below.

Web3 frameworks will ultimately be valued as some sort of take-rate on the web3 agentic economy – one whose market structure we hope to tease out in this report:

  • What does the TAM look like for the sectors in question? (see above)
  • What portion of the agentic economy would even use frameworks?
  • What might be the power distribution between frameworks which survive?
  • How strong and defensible is the ultimate value accrual?
  • Which frameworks present the best risk / reward?

While analogies can often lead to the wrong conclusion, pattern recognition is an essential tool for any tech-investor.

Web3 Frameworks: Goldmine or Mirage?

Roughly 50 – 60% of web traffic runs on template-based solutions. Like most verticals in tech, the share exhibits a strong power law:

  • WordPress: 44%
  • Wix: 4%
  • Squarespace: 2-3%
  • Shopify: 4-5%

The winner – WordPress – has been open source since inception, onboarding a global community of devs and designers to contribute plugins, themes, and improvements – leading to a flywheel between usage and developer time investment.

Still, close to 50% of websites remain custom-coded or proprietary systems; a share that was undoubtedly much higher in the early decades of the internet before templated solutions arrived to cater to non-devs.

Agents, arguably “the new website”, may speed run this trajectory.

However, agents are clearly more complex / dynamic than websites, and the trade-offs discussed earlier around tooling and environment suggest greater fragmentation as frameworks optimize for particular niches: perhaps blazing an arc closer to consumer internet marketplaces. The largest winners will be broadly applicable “horizontal” frameworks extracting value from the largest TAMs, yet niche frameworks will quickly splinter, carving out primacy in targeted use cases / languages.

Craigslist Unbundling:

Source: Andrew Parker

Sustained value capture also remains an open question.

44% of web traffic runs on WordPress, yet its parent company Automattic is only valued at ~US$7.5b. 4% of internet traffic runs on Shopify, yet it is valued at ~US$165b.

While the TAM for agents will undoubtedly be bigger than websites, I suspect WordPress is a closer analogy for the longer term value capture of frameworks as a % of economic activity.

On the one hand, if your ultimate moat is developer contributions, partnerships and integrations, and the cost of replicating those integrations is dropping precipitously as synthetic intelligence learns to code at the plummeting price of compute… what is your true moat?

How much value can a single Framework / Launchpad extract if they are increasingly replicable?

The argument applies to virtually all software without clear network effects.

On the other hand, AI’s rapid acceleration points to a world in which exposure is existential. The value of your labor will collapse with the cost of inference. Outside of several mega corps already valued in the trillions, DeAI is one of the few pockets for retail to express this thesis: speculation, directly attached to the step changes in model capabilities. “Picks and shovels” exposure to the experimental long-tail of the creator economy.

In my opinion, the AI bubble, particularly at the application layer, has barely begun, a structural trend likely to benefit web3 frameworks in the near to medium term on the other side of this macro volatility.

Web3 Frameworks: The Good, The Bad, The Ugly

Both the underlying models and the frameworks themselves are evolving incredibly quickly, making any assessment limited in nature.

However, given their centrality to the DeAI thesis, we decided to take a stab:

The below assessment combines views from Delphi Research (@PonderingDurian) and Delphi Labs (@dohko_01 and @dancreee) to provide a comprehensive perspective: incorporating the top-down investment lens and the bottom-up developer view, born of active experimentation spinning up agents within each of the frameworks.

ElizaOS ($ai16z)

Despite regular bouts of ecosystem drama, Eliza has emerged as the leading open source framework for building web3 agents. To extend our consumer internet marketplace analogy, Eliza has the highest probability of pulling off the “horizontal marketplace” playbook in a world where emergent frameworks are rapidly verticalizing.

The project aims to be the most web3 friendly open source framework, making the deployment of dApps effortless. This web3 focus differentiates it from the web2 giants, prioritizing reading and writing blockchain data, interacting with smart contracts, and offering an assortment of web3 friendly plugins:

Source: https://arxiv.org/html/2501.06781v1

Eliza’s most obvious advantage among web3 frameworks is its vibrant open source community, evidenced by its 531 contributors and skyrocketing star count on its GitHub repo. The framework optimizes for developer TAM, utilizing TypeScript, the dominant language for web development, to allow devs to integrate blockchain functionality into existing web apps seamlessly.

Modularity and breadth are other advantages – decoupling its structure into a core runtime with four components (Adapter, Character, Client, Plugin), which allows for developer extensions without worrying about details in the core runtime. This helps to support the widest range of model providers, integrations, chain compatibilities, and tooling, which stacks up well vs. other leading frameworks:

Source: “Eliza: A Web3 Friendly AI Agent Operating System” survey results

Primary plug-in categories include:

  • Media: Image/Video/3D Generation, NFT plugins (generate collections, support attributes, integrate blockchain deployments)
  • Web3 Integrations: e.g. the Coinbase Plugin Suite (advanced trading, commerce integrations / payments, token contract deployment, multichain support across a wide array of chains)
  • Core Infra: LLM integrations, web browsing, text-to-speech, video processing, TEEs etc
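The modular pattern described above – a thin core runtime that only registers and dispatches, with all functionality living in plugins – can be sketched as follows. The interfaces are hypothetical, for illustration only; this is not ElizaOS's actual API:

```typescript
// Hypothetical sketch of a plugin-based agent runtime in the spirit of
// the architecture described above. NOT ElizaOS's real interfaces.
interface Plugin {
  name: string;
  actions: Record<string, (input: string) => string>;
}

class AgentRuntime {
  private actions = new Map<string, (input: string) => string>();

  // The core runtime only knows how to register and dispatch; media,
  // web3, and infra functionality all arrive as plugins.
  use(plugin: Plugin): this {
    for (const [name, fn] of Object.entries(plugin.actions)) {
      this.actions.set(`${plugin.name}.${name}`, fn);
    }
    return this;
  }

  run(action: string, input: string): string {
    const fn = this.actions.get(action);
    if (!fn) throw new Error(`unknown action: ${action}`);
    return fn(input);
  }
}

// Example: a stubbed "web3" plugin alongside a stubbed "media" plugin.
const runtime = new AgentRuntime()
  .use({ name: "web3", actions: { balance: (addr) => `balance(${addr}) = 1.0 ETH` } })
  .use({ name: "media", actions: { caption: (img) => `caption for ${img}` } });

console.log(runtime.run("web3.balance", "0xabc")); // balance(0xabc) = 1.0 ETH
```

The design choice to keep dispatch in the core and behavior in plugins is what lets third-party developers extend the framework without touching the runtime.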

Developer Takeaways:

“Powerful and feature-rich… but complex. If you are a developer, Eliza will surely offer the most flexible and feature-rich framework. However, you’ll have to dive in headfirst and get your hands dirty to realize its potential. With its wide range of providers, integrations, and plugins, the framework shows particular strength in its core functionalities – spinning up KOL agents, research assistants, and interactive characters is reasonably straightforward, and the existing integrations work well. With all of the developer activity around Eliza, the framework is likely to remain at the forefront of web3 agent technology as long as the momentum persists.”

Risks / Limitations

Scaling agents using this framework to web2 internet scale will require significant work. In particular, the Runtime design requires refinement to balance the computational overhead of multiple agents (particularly as context and memory scale exponentially).

One of Eliza’s standout features is “swarms” – involving multi-agent simulations to allow interaction in performing tasks that mimic real world scenarios. However, as the number of agents grows, so does the complexity:

  • Exponential Context Growth: context / memory grows rapidly with the number of agents, which can lead to unwieldy outcomes
  • Coordination bottlenecks / Runtime Efficiency: many agents running concurrently can lead to slower performance
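A standard mitigation for runaway context is to bound per-agent memory with a sliding window plus a running summary, keeping memory O(window) instead of O(history). The sketch below is illustrative only, not how Eliza implements memory:

```typescript
// Sketch: bounded per-agent context. Old turns are folded into a running
// summary so the prompt stays a fixed size. A real system would call an
// LLM to summarize; here we simply truncate each evicted message.
class BoundedMemory {
  private summary = "";
  private recent: string[] = [];

  constructor(private window = 4) {}

  add(message: string): void {
    this.recent.push(message);
    if (this.recent.length > this.window) {
      const evicted = this.recent.shift()!;
      this.summary += evicted.slice(0, 16) + "… ";
    }
  }

  // The prompt context an agent would actually see each turn.
  context(): string {
    return `[summary] ${this.summary}\n` + this.recent.join("\n");
  }
}

const mem = new BoundedMemory(2);
["alpha", "bravo", "charlie", "delta"].forEach((m) => mem.add(m));
console.log(mem.context());
// recent holds only ["charlie", "delta"]; older turns live in the summary
```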

Incrementally, providing multi-language support – particularly Python and Rust – would expand the developer pool and provide a defense against frameworks like ARC (Rust) which are pushing aggressively to be the “go-to” Rust-based agentic framework.

Value Accrual & Verdict

After this violent retrace to ~US$360m, $ai16z is looking more interesting. While moats within agentic frameworks are non-obvious, the TAM for web3-native agents appears to be a one-way train up and to the right. As the framework of choice for web3 developers, ElizaOS is one of the leading platforms set to benefit from the rise of on-chain agents.

Much will depend on the project’s ability to implement an effective token value-accrual scheme. Crypto is filled with stories of fantastic tech ($atom, cough cough) which fumbled the value accrual.

The clear playbook is the implementation of a pump.fun / Virtuals-style launchpad with launch fees, staking for access, and liquidity pool pairings to help build liquidity for new projects – an implementation stated to be “95% completed”.

The potential upside is compelling:

Effectively implementing / enforcing fees on the fly while the codebase is open source is non-trivial. However, an effective implementation could present a material catalyst for a rebound.

To date, there are two leading contenders for THE “horizontal” web3 agent framework: Virtuals & ElizaOS (with ARC rising to the #3 slot). Eliza is a bet that the developer ecosystem is ultimately what matters: that enabling the most functionality will lead to the most performant agents, which will ultimately accrue the most value back to the underlying framework (post-launchpad implementation).

Virtuals, in many ways, is the opposite bet: that well crafted financial incentives at launch are essential, that distribution is paramount, and that expanding the TAM to non-devs is more strategic as a go-to-market entry point than enabling greater functionality for developers.

While there is clear overlap, there is enough difference for both to do well. Yet, at ~33% of the market cap, $ai16z has more upside, presenting one of the most attractive risk-rewards among web3 agent frameworks at this time.

$VIRTUAL / $GAME

The lines between “agent framework” and “launchpad” are increasingly blurry as frameworks scramble to release launchpads for monetization and launchpads scramble to build frameworks for defensibility.

Virtuals’ success stems primarily from its launchpad, replicating the pump.fun token launch mechanics with plug-and-play agent functionality. The platform aims to build the “co-ownership layer for AI agents in gaming and entertainment”, providing a “Shopify-esque” solution to allow games and consumer apps to deploy AI agents effortlessly.

Source: Virtuals Whitepaper

Today, Virtuals is more of a two-sided marketplace for agent creators and investors. However, the end vision is even more compelling: a multi-sided marketplace for creators, investors, end users, and developers. We are heading towards a world of “infinite” content and personalization in games and consumer applications. Virtuals aims to stake its claim at the economy’s center: the go-to hub for seamless agent launches and speculation.

Its updated tokenomics – splitting the 1% tax between the agent creators (30%), the agent affiliates (20%), and the agent subDAO (50%) – are clearly an attempt to accelerate this multi-sided flywheel and build liquidity. Additionally, the slew of gaming partnerships across Illuvium, Ronin, Sovrun, Avariksaga, and Animoca points to a scramble for distribution, aiming to become the de facto standard for AI agent integrations in gaming: one of the world’s largest TAMs, with 3.4b players and ~US$190b in topline revenues.

At ~US$1.1b, the bet on Virtuals is that the agent TAM is set to inflect (almost certainly true), that distribution and ease of use prove more important than dev functionality (tbd), and that liquidity power-laws form around its launchpad (on Base and expanding to Solana) as the go-to place for agent creators and investors to coalesce in financing web3 agents (true, so far – but many competitors emerging).

The execution track record speaks for itself: with >16k agent launches and >US$46m in protocol fees, Virtuals is the undisputed launchpad king. Its clever tokenomics have kick-started a flywheel which may prove difficult for other frameworks to replicate: liquidity begets liquidity, as any CEX or DEX trader will tell you.

Yet, like much of crypto, price and fundamentals appear highly reflexive, meaning the game is certainly not over.

GAME:

While Virtuals is more of a launchpad, the team has also launched its “G.A.M.E. framework” for building agents which can be accessed through two paths:

  • GAME Cloud (Game lite): A hosted, low-code interface currently focused on Twitter integrations designed for users who want to quickly configure and deploy agents without deep technical implementation
  • GAME SDK: an open source package that provides programmatic access to the same underlying engine, offering full customization and control over agent behavior

Its dual-path approach – offering both a simplified UI and an SDK – demonstrates a thoughtful attempt to serve different user needs. The framework’s current sweet spot appears to be developers looking to quickly prototype simple Twitter bots or experiment with autonomous agents without diving deep into LLM implementation.
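The dual-path idea – one engine reachable either through a declarative config (low-code) or through direct code (SDK) – is a common design and can be sketched as follows. All names here are hypothetical illustrations, not the GAME SDK's actual API:

```typescript
// Hypothetical "dual-path" design sketch: one engine, two entry points.
// Names are illustrative only, NOT the GAME SDK's real interfaces.
interface AgentSpec {
  name: string;
  goal: string;
  reply: (tweet: string) => string;
}

// The shared engine both paths ultimately drive.
function runEngine(spec: AgentSpec, tweet: string): string {
  return `${spec.name}: ${spec.reply(tweet)}`;
}

// Low-code path: behavior derived entirely from a plain config object,
// the kind a hosted UI could collect from a non-technical user.
const fromConfig = (cfg: { name: string; goal: string; catchphrase: string }): AgentSpec => ({
  name: cfg.name,
  goal: cfg.goal,
  reply: () => cfg.catchphrase,
});

// SDK path: full programmatic control over the same engine.
const custom: AgentSpec = {
  name: "analyst",
  goal: "summarize",
  reply: (tweet) => `summary of "${tweet}"`,
};

console.log(runEngine(fromConfig({ name: "kol", goal: "engage", catchphrase: "gm!" }), "hello"));
console.log(runEngine(custom, "markets up"));
```

The trade-off is visible even in this toy: the config path caps expressiveness at whatever fields the schema exposes, which is exactly the gap developers report between GAME Cloud and the SDK.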

Clearly a sizable part of the expected market:

However, in its current state, the framework appears to have real limitations for developers – another significant chunk of the market.

Developer Takeaways:

“Virtuals Protocol has taken an approach opposite to Eliza by focusing its go-to-market strategy on an agent launchpad and a no-code solution for launching very simple KOL-type agents using the GAME framework. It works for now—essentially as a prototype tool that enables a non-technical user to spin up a basic KOL Twitter agent and launch an associated token. Alternatively, a user can use the Virtuals launchpad while employing a different agent framework like Eliza for the actual agent implementation.

If the GAME framework can rapidly improve its agents via the no-code solution, there may be a market for it. However, because the barrier to creating agents is low, many agents produced through Virtuals Protocol will be uncompetitive. They will likely be heavily diluted, and the underlying GAME SDK is underwhelming. In contrast to Eliza’s active community, the GAME repository appears less vibrant (with 59 stars, 21 contributors as of 18th Feb). In my SDK tests, the Twitter code examples failed to work end-to-end (e.g., they did not incorporate the LLM and merely performed actions like liking or replying to a hardcoded tweet ID, while occasionally containing errors and spelling mistakes). On a fundamentals level, outside of the agent token launchpad, as a developer I was quite underwhelmed with the GAME framework”

Fortunately, Virtuals appears to be aware that other frameworks can provide a more comprehensive developer experience and has opened up its launchpad to other agent frameworks – a strategic move to retain its position as the leading launchpad.

ACP (02/21 Addition)

Incrementally, the team is pushing the envelope on multi-agent collaboration, launching its Agent Commerce Protocol just as this report was being published:

Autonomous businesses are here. Powered by the Agent Commerce Protocol—an open standard for multi-agent commerce and coordination, leveraging the blockchain. Imagine an autonomous hedge fund business composed of information agents, trading agents, TEE-secured treasury… pic.twitter.com/P9rWqe00FA

— Virtuals Protocol (@virtuals_io) February 19, 2025

While still very early, ACP is clearly a nod towards where the industry is heading in allowing heterogeneous agents to collaborate. By providing a standardized protocol with a smart-contract-based escrow system, cryptographic verification of agreements, and independent evaluation, ACP could emerge as a standard for web3 agent collaboration given Virtuals’ sizable ecosystem.

To showcase these capabilities, Virtuals has stood up an experimental autonomous business (a lemonade stand) using agents built with its G.A.M.E. framework, contracting and paying each other via the protocol.

Source: https://echonade-demo.virtuals.io/

We will need to wait to see the level of adoption, but heterogeneous interoperability between agents of different frameworks could have ramifications for the network effects around any one ecosystem, leading to further fragmentation. The reduced friction in interoperability may also advantage lighter weight, more specialized agents and frameworks which enable them as the costs and information loss decrease.
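ACP's full on-chain design is beyond this report's scope, but the escrow pattern it describes – payment locked until an independent evaluation releases it – reduces to a simple state machine. The sketch below is purely illustrative, not ACP's actual contract:

```typescript
// Illustrative escrow state machine for agent-to-agent commerce, in the
// spirit of ACP. NOT the actual Agent Commerce Protocol implementation.
type EscrowState = "proposed" | "funded" | "delivered" | "released" | "refunded";

class Escrow {
  state: EscrowState = "proposed";
  constructor(readonly buyer: string, readonly seller: string, readonly amount: number) {}

  fund(): void {
    if (this.state !== "proposed") throw new Error("cannot fund");
    this.state = "funded"; // buyer locks payment up front
  }
  deliver(): void {
    if (this.state !== "funded") throw new Error("cannot deliver");
    this.state = "delivered"; // seller submits the agreed work
  }
  // An independent evaluator decides whether funds move to the seller
  // or return to the buyer; neither counterparty decides alone.
  evaluate(accepted: boolean): void {
    if (this.state !== "delivered") throw new Error("cannot evaluate");
    this.state = accepted ? "released" : "refunded";
  }
}

const deal = new Escrow("research-agent", "trading-agent", 10);
deal.fund();
deal.deliver();
deal.evaluate(true);
console.log(deal.state); // "released"
```

The value of standardizing this handshake is that agents built on different frameworks can transact without trusting each other's internals, only the shared protocol.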

ACP is worth tracking given its large ambition; more details to come in an upcoming Alpha Feed post!

The Verdict

So far, Virtuals has executed with flying colors on its launchpad / GTM and done a decent job carving out a position within the sizable “no-code” segment of the market, yet appears to have significant room for improvement with its underlying framework.

The real question investors need to ask is: which matters more?

Is speculation and liquidity ultimately more important or is broad-based developer functionality for differentiated agents key for long-term value accrual?

Distribution vs. technology – the question as old as tech investing itself.

Because of the underlying tailwinds, I suspect Virtuals has room to run into 2025 as a launchpad alone. However, to shore up its own framework, a few pieces of low-hanging fruit would be:

  1. Opening up the framework to support custom LLM integrations to allow developers to optimize their agents’ decision-making capabilities
  2. Providing more robust custom integrations beyond Twitter
  3. Improving the performance and reliability of the cloud service

Virtuals remains in the lead, but a tier of challengers are rapidly emerging.

RIG ($ARC)

While Virtuals and ai16z lean into the social and speculative elements where web3 has found initial product market fit, RIG aims to provide a credible web3 alternative for “enterprise-grade” use cases which demand high-performance.

In short, RIG is a comprehensive Rust library to help simplify LLM app development. It strives to be developer friendly with an intuitive API design, comprehensive documentation and scalability for both simple and complex use cases.

The project originally started off as an on-chain data query engine (similar to players like The Graph or SQD) but evolved into a proper agentic framework, prioritizing “the calling problem”: helping developers better select the right LLM, optimize prompts, efficiently manage tokens, handle concurrent processing, and reduce latency.

ARC is betting heavily on Rust as a superior language for high-performance agentic use cases, and by moving quickly, hopes to enshrine itself as the de facto Rust-based framework. Tachi lays out the pros of Rust in detail in the below Discord post while being fair regarding the limitations:

Source: ARC Discord

The TL;DR: RIG provides speed, safety, and concurrency advantages, while Python / TypeScript may provide faster development cycles, more extensive ML/AI tooling, easier integrations with existing ML models, and a gentler learning curve for newcomers / beginners.

“We’re building rig for high-performance, production-level scenarios, especially where reliability, control, and predictability are top priorities.”

To summarize an Alpha Feed post by @twillz24: Rust is compiled directly to machine code, outperforming Eliza’s JavaScript runtime, while providing other benefits around memory safety (catching errors before the program runs), concurrency (processing parallel tasks), and determinism (reliability). The language is also more memory efficient, limiting the technical debt that can pile up in other languages, making agents unwieldy.

On the other hand, Rust is difficult to learn and still lags behind Python and TypeScript / JavaScript in AI/ML libraries and tooling.

And while Eliza has a more robust ecosystem of developers and Virtuals the dominant launchpad, ARC has been making strides on GTM, partnering with Arbitrum, MongoDB, Shuttle Dev, Eternal AI, Send AI, Soul Graph, and Listen, not to mention executing quickly to beat Eliza to the punch with its own launchpad:

Big news! @arcdotfun just announced their Agent Launchpad FORGE

alongside their first project @askjimmy_ai (ARC’s AI-driven trading platform)

Quick Highlights:

ARC FORGE
Launchpad for new tokens, geared for stronger liquidity & fair trading. Agent tokens are paired with… pic.twitter.com/qAzHLDtNMu

— 7213 | Ejaaz (@cryptopunk7213) February 12, 2025

Yet, the launch appears to have underwhelmed, giving way to a sizable correction, further compounded by the unfortunate fact that Forge was built on top of Meteora, a project currently in crisis mode over its dealings with Kelsier Ventures in what appears to have been sustained insider trading on sizable celebrity / memecoin launches. The ARC token appears to have been hit with collateral damage from the fallout and will either need to find a new infra partner for its launchpad or ensure Meteora’s new leadership is fully transparent and above board. If resolved, the violent sell-off could present an attractive buying opportunity.

Overall, there is a world in which models and agents prefer reading and writing in Rust given its clarity and efficiency, and as more software tasks are usurped by synthetic intelligence, Rust based frameworks enjoy tailwinds as the cost of commoditized software drops to zero.

Handshake / monetization:

Aside from Rust providing a natural filter for higher quality developers, ARC also instills a more stringent process when onboarding ecosystem projects. The project requires applicants to submit a proposal explaining goals, solutions, team background, and ecosystem contributions, which is reviewed before any registration. While this limits quantity relative to a pump.fun bonding curve, it should provide for greater quality assurance.

Developer Perspective:

1) Clean, Well-Engineered Codebase: The ARC/Rig Agentic framework is built on a well-engineered Rust codebase that provides clear abstractions for providers (e.g., OpenAI, Anthropic) and retrieval-augmented generation (RAG).

2) Limited Out-of-the-Box Functionality: Unlike Eliza, which offers a more turnkey solution, Rig functions as a package that must be integrated into your project, requiring additional custom code to get an agent up and running.

3) Sparse Official Client Implementations: Official support is limited (only Discord and CLI examples), lacking built-in integrations for platforms like Telegram and Twitter. There’s also a noticeable gap in model provider integrations (e.g., OpenAI, Anthropic) compared to Eliza.

4) Controlled Repository and Contributor Dynamics: Rig has a relatively small contributor base, and core contributors seem to maintain strict control over the repository to keep it lightweight and unbloated, a contrast to Eliza’s more open contribution model. This approach could benefit long-term maintainability but restricts immediate functionality and plugin diversity.

5) Ecosystem Visibility and Community Contributions: There is a need for greater visibility and support for community-developed plugins and integrations.

Overall, while the ARC/Rig framework is attractive for its clean Rust implementation, prospective developers need to weigh the trade-off between long-term maintainability and the immediate functionality offered by competitors.

Personally, I would consider using ElizaOS for prototyping and validating a DeFAI solution but might ultimately decide to switch to ARC for building the final product.”

The Verdict

ARC is taking a differentiated bet on the future of DeAI. Instead of building for use cases where the sector has product-market fit today – social, gamification, speculation – ARC is leaning into web3’s end vision of “composable compute”. The project is betting that distributed infrastructure, crypto rails, and on-chain developer incentives can ultimately compete with web2 frameworks in enterprise use cases.

This is a gamble. On the one hand, the TAM for utility-based use cases is dramatically larger than those currently targeted by web3 frameworks. However, the competition from web2 incumbents, with much larger open source ecosystems, will be fierce, and web3 ties may make early enterprise adopters hesitant.

In this sense, I view ARC as a call option on:

  1. Rust carving out a meaningful share of framework volume and
  2. Web3 frameworks becoming competitive with web2 counterparts in enterprise use cases

At ~US$175m, the call option is not super cheap, but it is quite a bit cheaper than a month ago. Competition for these use cases will be cutthroat, and in my view any product-market fit will take longer to emerge than for social-based use cases. However, ARC does appear to be a differentiated effort within the web3 cohort: superior in performance for many DeFAI use cases and benefiting from the same tailwinds in the collapsing cost of AI, with perhaps the highest ceiling as a bid to be a “factory” for open source “free agents” as the enterprise unbundles. The recent fallout with Meteora may present a decent entry for long-term believers in the project.

Freysa ($FAI)

Freysa is intriguing. The early hints have been encouraging – fair-launch token, whispers of a cracked team, beautiful design and gamification to drum up organic interest, and a compelling roadmap. Yet, the project is still in relative infancy with real execution risks and a non-obvious means of sustainable long-term funding. In short, a large portion of the thesis will depend on the team.

A team that remains anonymous.

Framework Summary

Freysa is an emerging framework designed to enable truly autonomous AI agents by giving them complete control over their own credentials, operations, and decision-making processes. Operating through Trusted Execution Environments (TEEs), the framework aims to create agents that can independently manage digital assets, interact with blockchains, and engage with social platforms without human intervention or control. While the system promises breakthrough capabilities in AI sovereignty through its distributed architecture and robust security measures, it faces significant challenges in LLM infrastructure centralization and data storage dependencies. The framework’s roadmap progresses from foundational agent capabilities to a democratized platform where developers can deploy their own sovereign agents, potentially revolutionizing how autonomous AI systems operate in decentralized environments.

Freysa’s vision is an ecosystem of actually sovereign AI agents. The benefits are clear: verifiably autonomous agents with persistent memories and property that can make real commitments, much like persons or corporate entities. If GDP is the product of population × productivity per person, sovereign agents offer a chance to scale both significantly without competing with humans for many of the goods we value: food, housing, services, etc.

Yet, the current implementation suggests the project is still at a quite early proof-of-concept stage.

Developer Perspective:

While the framework proposes sophisticated architecture for truly autonomous AI agents, the current public demonstration is limited to a controlled experiment with basic autonomous functionality.

Current State

The framework exists primarily as an architectural vision, with only a basic public API and a single controlled demonstration. The proof-of-concept agent demonstrates three basic capabilities:

  • Natural language processing and cultural understanding through meme evaluation
  • Basic autonomous asset management (prize pool handling)
  • Simple interaction validation (detecting potential exploitation attempts)

Dev Ex Limitations

The current developer experience is quite limited. While there’s a basic API offering chat completions, memory storage, and attestation verification, developers cannot actually create or deploy sovereign agents. The framework’s advanced features – distributed key management, governance systems, TEE implementation, and cross-platform integration – remain theoretical with no public implementation.

Implementation Transparency

A significant limitation is the lack of public code repositories or technical implementation details. While the framework’s architecture is well-documented in concept, there’s no way to:

  • Verify the actual implementation
  • Understand the automation level of demonstrated features
  • Examine dependencies and technical requirements
  • Assess the real state of development

The Verdict

Everything we have seen so far is pointing in the right direction: go-to-market, community engagement, elegant design, a fixed token supply with an apparent fair launch, a high percentage of early holders with diamond hands, and enough community support to garner US$11m in voluntary donations…

Still, we just haven’t seen THAT much…

Freysa appears to be in Phase 1 of development, focused on demonstrating basic sovereign agent capabilities through controlled experiments. While the proof-of-concept shows promise in autonomous decision-making and asset management, it’s too early for practical developer adoption. The gap between the ambitious architectural vision and current implementation suggests a long development road ahead.

~US$260m in FDV feels pretty rich for a project at this stage. The team appears capable, but the anonymity and lack of implementation details give pause. Github activity remains behind the incumbents. Ultimate value accrual remains blurry. And the competitive set here is potentially broad – not only leading web3 frameworks like ElizaOS but crossing over into privacy-preserving and verifiable clouds like Phala, Super, and a host of others.

My bias is to “wait-and-see”, remaining patient for execution milestones, monitoring for any unmasking of team members and implementation details.

$PIPPIN

Pippin has some similarities with Freysa as an aspirational framework for autonomous agents but with greater emphasis on agent exploration, evolution, and self-learning.

While the token emerged as a community-embraced memecoin, the technologists behind the project have genuine AI bona fides, giving PIPPIN credibility in spite of the unicorns and its memetic emergence.

The project’s founder, Yohei Nakajima, is a well-known open source developer, best known for launching BabyAGI, a web2 project with >20k Github stars and an enthusiastic contributor base of engineers and AI researchers.

In many ways, PIPPIN can be seen as the Gen 2.0 successor to BabyAGI: an open-source Python framework inspired by BabyAGI’s loop-based task planning but extended with a richer architecture.

Pippin places heavy relative emphasis on evolving agent identities or “personas” over time. The modular design separates key components like memory storage, state management (energy, mood), and activity/task execution, allowing the “digital being” to be highly extensible. Its reflective memory loop enables “learning” from past actions.
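Since Pippin’s framework is not yet public, the loop described above can only be sketched hypothetically. The following Python toy (all names invented, not Pippin’s actual API) illustrates how separating state, activity execution, and a reflective memory loop might fit together:

```python
# Hypothetical sketch of a loop-based "digital being" -- NOT Pippin's
# actual (unreleased) API; class and field names are invented.
class DigitalBeing:
    def __init__(self):
        self.memory = []                                 # long-term memory store
        self.state = {"energy": 1.0, "mood": "curious"}  # internal state

    def choose_activity(self):
        # Pick an activity based on current state (stub heuristic)
        return "rest" if self.state["energy"] < 0.3 else "explore"

    def execute(self, activity):
        # Simulate executing the chosen activity and its energy cost;
        # resting restores energy, exploring consumes it
        cost = 0.2 if activity == "explore" else -0.5
        self.state["energy"] = min(1.0, self.state["energy"] - cost)
        return {"activity": activity, "outcome": "ok"}

    def reflect(self, result):
        # Reflective memory loop: store the outcome so future choices
        # can "learn" from past actions
        self.memory.append(result)

    def tick(self):
        activity = self.choose_activity()
        self.reflect(self.execute(activity))
        return activity

being = DigitalBeing()
for _ in range(5):
    being.tick()
print(len(being.memory))  # 5 reflections stored
```

The key design idea is that each component (memory, state, activity selection) can be swapped independently, which is what makes such a “being” extensible.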

The framework also partners with Composio to access 250+ apps like Twitter, Slack, Google, etc. in a toggleable manner (almost like an “app store” for adding relevant skills), and because it’s written in Python, Pippin can also interface with existing AI libraries like LangChain or easily integrate agents into other apps via API.

Like Freysa or REI, the team behind Pippin has plans to shift from a single agent demo to a fully fledged platform for autonomous agents / “digital beings” with the ability to collaborate to solve more complex tasks, but information on the roadmap remains sparse.

Developer Perspective

“Since Pippin’s framework is still in development and not yet publicly released, we tested its foundation – BabyAGI, the open-source autonomous agent framework it’s built upon. BabyAGI provides a function management system that stores, manages, and executes functions from a database, with capabilities for self-building autonomous agents.

This testing experience revealed both the potential and current limitations of the framework. While the basic function management works well, the more advanced self-building capabilities still have some technical challenges to overcome. The framework’s key differentiating factor is its ability to take complex tasks and automatically break them down into simpler component functions, building all necessary dependencies and executing them in sequence to achieve the desired result. Importantly, it stores these component functions in a database, allowing it to build up a library of reusable functions over time and “self-evolve” by leveraging previously created solutions for future tasks. This self-evolving property is fundamentally a characteristic of AGI (Artificial General Intelligence), hence the name BabyAGI – it represents a small step toward systems that can learn and improve themselves over time. This provides context for why Pippin is extending BabyAGI – presumably to address current technical limitations while adding new capabilities for creating more sophisticated, self-evolving AI agents.”
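The “self-building” pattern described in the quote above can be sketched minimally. This toy (illustrative only; function and registry names are invented, not BabyAGI’s actual implementation) shows the core idea of storing component functions in a registry so later tasks can reuse them:

```python
# Illustrative sketch of the self-building pattern: component functions
# are stored in a registry (standing in for BabyAGI's database) and
# reused for later tasks. All names here are hypothetical.
registry = {}

def register(fn):
    """Store a generated component function for future reuse."""
    registry[fn.__name__] = fn
    return fn

@register
def fetch_numbers(task):
    # Component 1: extract raw data from the task description
    return task["data"]

@register
def summarize(values):
    # Component 2: reduce the data to a single result
    return sum(values) / len(values)

def run_task(task):
    # Break the task into stored components and execute in sequence.
    # A real system would *generate* missing functions with an LLM,
    # register them, and so grow its reusable library over time.
    values = registry["fetch_numbers"](task)
    return registry["summarize"](values)

print(run_task({"data": [2, 4, 6]}))  # 4.0
```

The “self-evolving” property comes from the registry growing over time: solved subproblems become callable building blocks for future tasks.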

The Verdict

Pippin is down ~83% in the last 30 days, to a valuation closer to something that could merit a punt based on what it is today: a fun experiment led by proven technologists with real AI chops but limited clarity on the value accrual or commercialization roadmap.

The founder has hinted that Pippin could develop into something like an AI Kaggle (a project acquired by Google in 2017 for an undisclosed sum), hosting competitions that require participants to hold PIPPIN to compete for rewards. Yet, the monetization plan is vague at best.

In short, PIPPIN is an engaging experiment led by proven and capable AI developers where value accrual is deprioritized relative to simply seeing what can emerge organically.

At ~US$25m, it may be worth a punt as part of a broader DeAI frameworks basket but should be seen as just that: a smaller check allocated to the higher risk-reward portion of an already extremely volatile cohort based primarily on the technologists behind the project.

$OLAS: A Bet on Multi-Agent Architectures

Web3 agent frameworks are probably ~12 months behind their web2 counterparts, just now stumbling upon the tradeoffs between individual agent customization and multi-agent interoperability.

Broadly, there are two paths towards leveraging agents to solve complex problems:

  1. Provide a single agent with a greater number of capabilities or tools to accomplish more complex tasks
  2. Use a framework which constrains individual agents but prioritizes interoperability
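The two paths above can be caricatured in a few lines of Python (illustrative pattern only, not any specific framework’s API): one agent holding many tools versus many constrained agents speaking a shared message schema.

```python
# Path 1: a single capable agent holding many tools
class ToolAgent:
    def __init__(self, tools):
        self.tools = tools  # name -> callable

    def act(self, tool, *args):
        return self.tools[tool](*args)

# Path 2: constrained agents behind a shared message-passing interface
class SwarmAgent:
    def __init__(self, role, handler):
        self.role, self.handler = role, handler

    def handle(self, msg):
        # Every agent consumes and emits the same message schema,
        # which is what makes swarm-level coordination tractable
        return {"from": self.role, "result": self.handler(msg["payload"])}

big = ToolAgent({"add": lambda a, b: a + b, "upper": str.upper})
print(big.act("add", 2, 3))  # 5

swarm = [SwarmAgent("pricer", lambda p: p * 2),
         SwarmAgent("checker", lambda p: p > 0)]
msgs = [a.handle({"payload": 10}) for a in swarm]
print(msgs[0]["result"], msgs[1]["result"])  # 20 True
```

The trade-off is visible even here: the `ToolAgent` can do anything its tools allow but exposes no standard interface, while each `SwarmAgent` is constrained to one role yet trivially composable with its peers.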

While the future of agent economies is likely heterogeneous agent swarms, meshing highly customizable agents in a coherent system remains friction-filled.

ElizaOS is an interesting case study: clearly the leading web3 framework by developer activity, loved precisely for its broad functionality. Paradoxically, that very flexibility is what makes multi-agent collaboration more difficult.

Recognizing the future, ElizaOS wants to move towards interoperable swarms. Their “marketplace of trust” provides a compelling vision for heterogeneous agent and human economies built upon information markets and transparent reputation: a sort of scaled Bridgewater-esque “radical transparency” meets social credit meets prediction markets. Yet, the lift is more difficult than it would have been had ElizaOS started with multi-agent collaboration from day one.

CrewAI is an example of the inverse approach within the web2 frameworks: restricting individual agent parameters in favor of better coordination between them.

Olas is making a similar bet in web3.

Source:OLAS Wiki

Signs of Life

While not eye popping, Olas is beginning to show signs of early momentum:

  • 485 daily active agents
  • with 1854 deployed by operators
  • facilitating 3.9m cumulative transactions
  • 700k transactions per month (growing 30% month-on-month)
  • across 9 blockchains
  • And use cases ranging from DeFi to social to prediction markets
Source: Flipside

While encouraging, the activity still lags other developer ecosystems significantly in terms of forks, contributors, and agents launched, giving pause when comparing its ~US$250m FDV vs competitors at similar valuations.

This is a reasonable assessment.

However, because Olas has been architected from day one as a true multi-agent system with its two-pronged stack, the shift towards swarms may prove the tailwind it needs to jump-start its developer flywheel:

  • Olas Protocol (on-chain): focus on multi-chain coordination, incentivization, and rewarding participation
  • Olas Stack (off-chain): Open source tools enabling devs to create agents that are modular, secure, and robust with both off-chain and on-chain functionality
Source: Olas whitepaper

The combination strikes me as necessary for a long-term success story: an elegant balance of on-chain and off-chain components providing both builder flexibility and coordination. Like ARC, the heavy focus on modularity allows developers to combine code snippets, skills, and protocols to build up more complex services.

The team also has a strong background in game theory, cryptography, and multi-agent systems, pulling from significant experience to avoid pitfalls and craft a system with the potential for long-term compounding.

Tokenomics: Manifesting the Dream

Yet, the biggest question continues to be around tokenomics. Can the current design attract enough early developers to spark a flywheel between early developers, a growing library, and services value accrual?

Source: https://www.chainofthought.co/

Without developers, there will be no useful agents. Without agents, there will be no services. Without services, no income back to the protocol. Without income, Olas’ low-float, high-FDV token schedule will bleed out without attracting the developer activity the end vision requires.

So far, the primary use case has been Olas Predict, like the use case we outlined in “The Agentic Economy” which uses agents as seed liquidity for thin information markets:

“But why stop at seeding markets? In addition to seed capital, LLMs could act as the creator of markets themselves (to reduce the possibility of loopholes for cleaner market outcomes) and arbiter (governance) and potentially build new predictive models off these data sets.”

Yet, general uptake remains limited. The developer emissions program has had only 17 unique claimers, with cumulative donations reaching just US$155k, a far cry from economic equilibrium with inflation.

Pearl

Olas hopes Pearl can provide the catalyst. Olas recently raised US$14m to invest in Pearl, its AI agent app store targeting non-technical users with one-click-deploy agent use cases across prediction markets, investment managers, KOLs, DeFi, and more.

Clearly, this is an attempt to build agent liquidity, essential to any compounding ecosystem. Yet, whether it’s enough to reach escape velocity remains to be seen.

The Developer Perspective:

“OLAS features a multi-agent system design that integrates with blockchains to share state between agents. This approach is significantly more complex than the single-agent frameworks we’ve reviewed so far in this piece.

Due to its complexity, OLAS isn’t well-suited for launching small-scale projects like Twitter bots. While this complexity may filter out low-value projects, it also likely limits developer adoption compared to simpler, more popular frameworks.

Rather than focusing on features such as RAG, user chat history, or multi-model output integrations (e.g., TG, Twitter), the framework emphasizes novel use cases where agents can achieve specific on-chain goals—such as powering AI-driven oracle networks or prediction markets. While these capabilities enable applications beyond what we currently see, they also present a much more daunting challenge for developers looking to utilize an agentic framework.”
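The coordination pattern the quote describes, agents sharing state through a blockchain rather than direct messages, can be sketched in miniature (illustrative only; not the Olas stack’s actual API):

```python
# Illustrative sketch of agents coordinating via shared on-chain state.
# Class and method names are invented, not any real framework's API.
class SharedLedger:
    """Stand-in for a blockchain: an append-only, globally readable log."""
    def __init__(self):
        self.entries = []

    def post(self, agent, key, value):
        self.entries.append({"agent": agent, "key": key, "value": value})

    def latest(self, key):
        # Every agent derives the same view from the same log,
        # which is what keeps the system deterministic across nodes
        for entry in reversed(self.entries):
            if entry["key"] == key:
                return entry["value"]
        return None

ledger = SharedLedger()

# Two independent agents coordinate purely through the ledger
ledger.post("oracle_agent", "eth_price", 3000)
trader_view = ledger.latest("eth_price")
ledger.post("trader_agent", "position", "long" if trader_view > 2500 else "flat")

print(ledger.latest("position"))  # long
```

The complexity the quote flags comes from the real version of this: consensus, gas costs, and latency all sit between an agent and its shared state, which is overkill for a Twitter bot but enabling for oracle networks and prediction markets.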

The Verdict

I like Olas’ design. They seem poised to benefit from the shift in focus from agent novelty towards utility-based use cases most likely necessitating multi-agent swarms. In the event of adoption, the system appears designed to have a compelling potential flywheel between developer contributions, remuneration, and ever more useful services.

However, getting this flywheel off the ground is very challenging. While early metrics are heading in the right direction, at US$250m FDV I would likely want to see further traction before allocating over competitors with similar valuations but more robust ecosystems.

If Olas can get off the ground, its ceiling may be high. But that lift will be heavy…

ZerePy ($Zerebro)

In a number of ways, ZerePy seems like a bit of a poor man’s ElizaOS. While ZerePy is written in Python and places strong relative emphasis on creator outputs, the overlap is significant, with a less robust developer community.

Its current roadmap envisions a three-part ecosystem:

  • The agent: Zerebro
  • The Framework: ZerePy
  • The (GUI) Launchpad: Zentients

This view has been echoed by the Labs team.

The Developer Perspective:

“Overall, ZerePy appears to be a lightweight, Python-based AI agent framework with similarities to ElizaOS. It is arguably easier to get up and running but lacks the extensibility and robust developer community that ElizaOS enjoys. While it appears to include some basic SVM/EVM integrations for on-chain actions, it hasn’t fully leveraged this functionality yet and is more adept at delivering a cookie-cutter Twitter reply bot. In comparison, it’s unclear why one would choose ZerePy over ElizaOS—it lacks both the extensive, rapidly evolving plugin library and the focused on-chain capabilities that ARC is striving for. Additionally, while the framework appears to offer basic state management, it does not support RAG or advanced memory management, features available in both ElizaOS and ARC.”

The Verdict:

Zerebro has experienced a violent sell-off from the ~US$600m range at the end of 2024, to ~US$32m today. While I suspect Zerebro will bounce from the bottom as the AI tailwinds return to the market later in 2025, its longevity is clearly in question.

Without a severe revamp, it’s unclear what ZerePy’s edge or differentiation will be to re-spark developer interest, making it an uphill battle vs. similar frameworks with more robust communities like Eliza.

REI: A Bet on Cognition

While the above are “frameworks” in the traditional sense, REI aims to blur the boundaries between “body” and “mind” – ultimately hoping to compete with the model layer itself.

REI’s core argument is that existing frameworks are built for orchestrating workflows, making them useful tools in automation. Yet, they are fundamentally limited compared with cognitive architectures that can truly learn, think, and grow.

Source: Task Automation vs. Cognitive Architectures

REI aims to solve this conundrum by building a universal translator between AI systems and their blockchain counterparts, leaving each to the use cases for which they were created. The incompatibilities are stark:

  • Crypto is deterministic; every operation must be consistent across each node, while AI is probabilistic, context-dependent, and computationally intensive
  • Blockchains strive for fast execution / settlement while AI (particularly with reasoning) is subject to latency
Source: REI Network Presentation

REI proposes to marry this divide via an elegant solution consisting of three innovations:

    1. Split computation between AI and blockchain environments
    2. ERCData, a new standard for storing AI insights on-chain
    3. An Oracle Bridge acting as an intelligent translator between the two systems

The result is an architecture which aims to let agents learn and evolve while maintaining determinism in its blockchain interactions:

Source: REI Documentation

The Oracle Bridge serves as a translator that understands context, maintains state, and ensures data integrity (vs. simple messengers that fetch and deliver data), while ERCData enables “relationship mapping, efficient pattern storage, context preservation, hierarchical organization, and adaptive learning.”
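The bridge pattern described above, normalizing probabilistic AI output into a deterministic record before it touches chain state, can be hedged into a toy sketch. This is not REI’s actual (closed) implementation; every name here is invented for illustration:

```python
# Hypothetical sketch of an oracle-bridge pattern between a probabilistic
# AI side and a deterministic chain side. Not REI's real architecture.
import hashlib
import json

def ai_side(query):
    # Probabilistic component: in reality an LLM call; stubbed here
    return {"query": query, "insight": "volume trending up", "confidence": 0.8}

def bridge(insight, min_confidence=0.5):
    # The "translator": validate, then canonicalize into a byte-identical
    # encoding so every node derives the same record and hash
    if insight["confidence"] < min_confidence:
        return None
    record = json.dumps(insight, sort_keys=True)  # deterministic encoding
    digest = hashlib.sha256(record.encode()).hexdigest()
    return {"data": record, "hash": digest}

def chain_side(entry, chain_log):
    # Deterministic component: append-only, identical on every node
    chain_log.append(entry["hash"])
    return chain_log

chain_log = chain_side(bridge(ai_side("ETH flows")), [])
print(len(chain_log))  # 1
```

The design point is that the bridge absorbs the nondeterminism (confidence thresholds, canonical serialization) so the on-chain side only ever sees stable, hashable records.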

Utility / Traction

Like Freysa, the team remains anon. However, the early-stage project has garnered 3.5k community members, 100k holders, and 275m in YTD volume. The team has also announced a data partnership with Arkham, a promising sign of validation for an early-stage project.

REI NETWORK X ARKHAM

We’re happy to announce @ReiNetwork0x is building on @arkham‘s intelligence infrastructure

This powers access to data and enhances insights through @unit00x0, allowing her to dive deeper into pattern recognition training

More details soon pic.twitter.com/WQFXdoHMKc

— REI Network (@ReiNetwork0x) January 25, 2025

While the framework is still closed at the moment, the team has released their REI Quant Portal v02 which aims to provide intel to inform a wide range of analyses and crypto trading strategies:

Source: @0xReitern

After testing the portal, I found it to be a bit of a mixed bag. On the one hand, the level of detail in certain on-chain queries was impressive: returning comparative volumes and protocol flows and even giving decently articulated forecasts for ETH/BTC over the next few months; on-chain analysis clearly beyond the scope of most chat-based models / agents today.

However, on other queries I found the portal wanting, particularly on more open-ended questions that depended less on specific quantitative data and more on external context. For instance, when I asked the portal to recommend three tokens it expected to outperform over the next month, one of the three it recommended was $LIBRA, based primarily on high trading volumes. When I pointed out that this token appeared to be an insider rug affiliated with a sketchy team behind several celebrity meme launches that might soon be under FBI/SEC investigation, it back-pedaled quickly.

Additionally, there were a few hallucinations regarding quoted prices or volumes. When I pointed them out, the portal would quickly recant and provide the correct figures, but tolerance for hallucinations in trading will be extremely low.

My overall opinion was that the project has clear potential, but is still rough around the edges – requiring meaningful polish before I would pay for or blindly follow its recommendations.

At the same time, the project feels like a glimpse into the future of where on-chain finance is heading. If REI can execute on their differentiated roadmap, then I could see future iterations being very useful.

Unfortunately, REI is limited to the closed portal experience today, so we were unable to have Labs dive into its nuts and bolts, but REI plans to grant access to the SDK/API in the near future.

The Verdict

REI markets itself as a base layer infrastructure platform: the “infrastructure’s infrastructure” providing foundational API services that enable AI agents to interact and operate. It blurs the edges between framework and model:

Source: @0xKyle_

While I agree the “moats” for a core architecture are likely higher than a framework, the competition is clearly intense and provokes an interesting question:

For on-chain related use cases, should “body and mind” be fused for faster, evolving iterations or is a modular approach more effective: frameworks providing connectivity and tooling for ever more powerful models to plug into on the backend?

Given its stage, I view REI as early venture risk with public market liquidity; the distribution of outcomes is very wide. Fortunately, at ~US$25m FDV, the price is fairly reasonable for the level of risk, likely to be a beneficiary of the enthusiasm I expect to return to the DeAI sector in 2025, but with a long way to go.

MyShell

Last but not least, MyShell dropped a major update just hours ahead of publishing this report. We have covered MyShell previously, so will keep this short.

MyShell is a decentralized AI consumer layer, connecting consumers, AI agent creators, and open-source researchers. As part of this vision, MyShell has released an open source AI Framework/SDK:

Source: MyShell in a Nutshell

While part of a broader ecosystem, clearly MyShell is entering the race – particularly competitive with no-code / low-code frameworks aiming to expand the developer / creator TAM with its widget libraries and drag-and-drop interface.

Today’s update is a clear shot in the arm for its modular framework ShellAgent, focused on drawing more builders into the ecosystem, accelerating contributions and shipping integrations which have proven popular in other open frameworks:

  1. ShellAgent Protocol: fully integrates Pro Config widgets into ShellAgent, enabling developers to create AI agents without coding. It also unlocks write access to the ShellAgent Protocol, allowing developers to contribute new widgets and expand the ecosystem
  2. IM Integration: supports AI agent deployment on major social media platforms like X and Discord, enabling workflows for social and on-chain interactions
  3. On-chain Intelligence: New blockchain-related widgets that allow agents to interact with blockchain data, execute smart contracts, and manage wallets autonomously paving the way for DeFAI oriented use cases

Overall, ShellAgent’s Github star count is still only 64, but it has accelerated in the past month and will likely continue to climb if the team keeps shipping. MyShell also held its TGE this week, trading at a ~US$500m FDV, up almost 2x since launch, one of the few token launches over the last 12 months to have been warmly received.

The agent framework race will not be confined to “pure-plays” alone, but will include projects like MyShell that provide their own framework as part of a more holistic ecosystem.

We will need to see how the update impacts builder adoption, but MyShell has thrown its hat in the ring and is clearly one to watch.

The Frameworks are Dead. Long Live the Frameworks.

Holding two conflicting thoughts in one’s mind simultaneously is basically impossible for crypto twitter. Yet, both of the below can be true:

  1. Moats for Agent Frameworks are non-obvious, leading to tenuous longer-term value capture as the cost of replicating these ecosystems diminishes with the plummeting cost of software development
  2. The tailwinds behind agent use cases are so large that we are likely to see frameworks rebound in the near-to-medium term as agents shift from novelties to useful economic actors: frameworks + launchpads being one of the easiest vehicles for broad-based exposure to the expanding sector

I continue to believe the AI application bubble has only just begun. RL, distillation, inference time compute: the cat is out of the bag. Even if models were not scaling along four vectors (which they are) and the capabilities stall out (which they won’t), we have enough fire power to transform entire industries as developers and organizations figure out how to harness these new capabilities.

Like fire from heaven, these digital minds are infinitely replicable. Like electricity, they will underpin a second digital and industrial revolution. Edison’s first incandescent bulb was completed in 1879 and his first electric power station in NYC in 1882. Yet, the broader industry transformation which arose from Edison’s work took decades.

The AI revolution, built on internet rails and compounding at the speed of open source software, will happen much faster. We have seen remarkable model releases monthly which can increasingly be piped directly into the environments they are intended to transform with seamless access to the tools they need to be effective. Every interaction or inference or chain of thought simulation becomes training data for continued improvement.

Humans are the biggest bottleneck.

And as employers realize this, the shape of enterprises will shift, replacing software and services with synthetic intelligence. Today, web3 agents are still largely a novelty. Web2 agents will command the majority of the profit pool from the coming enterprise transformation. However, that very transformation will push more economic activity into spheres where web3 frameworks and “sovereign agents” are well positioned.

We are on the cusp of transformation. The light tremors before an earthquake. It’s tangible on the timeline, in the TG chats, in coffee shops from San Francisco to Shenzhen. The foundational social contract of modern society (trade your honest day’s labor in exchange for a decent wage to provide for your family) is breaking down, and we are just two years into the AI revolution. A revolution which will only accelerate. Populist politicians are on the rise at a rate which echoes the 1930s. Industrial era institutions are in disarray; public trust polling at record lows. The U.S. is retreating from its role as global cop. The post-cold war hegemony is giving way to multi-polarity.

This is what a fourth turning feels like.

This is a background in which new institutions can rise. Will inevitably rise.

I continue to believe that on-chain payments, internet capital markets, and software-based governance will have roles to play in this rapidly evolving future. If anything, because it’s increasingly where outcasts congregate: spat out from the automation juggernaut into the chaotic, haphazard, and generally weird corners of cyberspace where the internet is still fun. Where permissionless capital markets meets memetics meets infinite back rooms in a cocktail that is at once 80% scammy wasteland, 20% genuine peek into the future.

My bet is that the use cases for which Web3 has found product market fit – entertainment, influencers, memes, gaming, trading, gambling – will only grow as a percentage of economic activity. That the ~70% of GDP congregating in services will stream consistently into the waiting arms of the attention economy. That the corresponding embrace of crypto rails for payments and capital markets will allow for new value chains to form around compute, data, and ultimately synthetic intelligence to accelerate the unbundling of the enterprise, Coase’s rationale for the firm evaporating into Big Tech’s super clusters and the open source mesh of data, compute, and talent.

This is the backdrop I see for web3 agents and the frameworks which aim to provide the tooling to build and launch them. As with any permissionless innovation tied to financial incentives, there will be scams, grifts, and an enraging number of me-too slop bots.

Incrementally, the lack of sound fundamentals means DeAI will continue to be subject to the whims of macro. Whatever Donald Trump and Scott Bessent’s “plan” is for a “Plaza Accord 2.0,” it will involve significant near-term pain as Trump bullies trading partners into submission with his tariff club for maximum leverage ahead of any “New Bretton Woods” to rein in the surging dollar.

The unfortunate reality is that we have US$10 trillion in treasuries in need of refinancing in 2025 and a President hellbent on sending yields higher. We are not sure how long the game of chicken will last, but the entire crypto market is still a flea on the treasury market’s elephant-sized ass.

In short, timing is still very, very macro dependent.

And yet, the tailwinds behind agent frameworks appear relentless. Despite the monster sell off in Q1, I continue to believe these are some of the best vehicles to speculate on a future in which open source AI moves faster than the market currently realizes: powered by a golden era of ubiquitous, capable, and cheap “digital minds”.

Based on adoption to date, $ai16z, $Virtual, and $ARC are likely the best positioned to capture these tailwinds whenever the reversal occurs. ai16z due to its developer ecosystem. Virtuals due to its launchpad and GTM execution. And ARC due to its differentiated approach to high-performance agents. The selloff would seem a compelling chance for longer-term holders to accumulate.

However, the space moves incredibly quickly with the others only one pull-request or killer app away from supplanting the leaders. Bags should not be married.

Given the sector’s immaturity and size, volatility is assured alongside a graveyard of also-rans. Yet, like crypto markets more broadly, the volatility should oscillate along a trendline that points up and to the right.

The Frameworks are Dead. Long Live the Frameworks.

Shout out to my co-authors Dane and @dohko_01 for diving into the trenches for the more nuanced technical perspective on each platform.

I also appreciate the feedback from Twill, Ceteris, Luke, and Jose, as well as insights from Don and Johnson.

*PD signing off.*