Feed

by Pondering Durian
20.08.2025

Really enjoyed this interview by Dwarkesh Patel with Casey Handmer on the future of energy. Casey makes a compelling case for solar as the torch-bearer for the coming energy transition.

For the first time in my life, I started contemplating buying rural desert land in Nevada and West Texas.

Highlights:

  • China currently has 20x more annual solar manufacturing capacity than the US, which could give it a significant edge in the AI race due to massive energy needs, but the US can catch up within five years through automation, cheap natural gas, and financial advantages. (PD: Seems like a very aggressive timeline and unlikely)
  • China's energy security is vulnerable due to reliance on imported oil and geopolitical risks, while the US benefits from isolation and stable neighbors.
  • AI will drive hundreds of gigawatts of new energy demand; solar is positioned as the scalable solution, potentially powering off-grid data centers with batteries by 2027, as natural gas turbines face supply limits.
  • Solar costs are dropping rapidly (43% reduction per production doubling), making it cheaper than running existing coal plants; by 2040, new data centers could be 100% solar-powered.
  • US environmental regulations (e.g., NEPA) slow solar deployment—Texas installs 10x more than California due to lighter rules—highlighting regulatory reform as key to competing.
  • Large-scale solar for 5 GW data centers requires ~50,000 acres, feasible in places like Texas, but permitting and non-contiguous land use are challenges.
  • Batteries enable temporal arbitrage, storing solar for peak times and reducing grid dependency, with per capita battery capacity surging (e.g., from 10g to 100kg via EVs).
  • As batteries proliferate, grid utilization will decline, electron travel distances will shorten, and mobile battery markets could emerge for flexibility.
  • GDP underestimates AI's impact due to deflation; energy consumption (e.g., solar-powered AI) is a better metric, as AGI could automate $60T in global labor value.

by Pondering Durian
11.08.2025

As an (at least aspiring) high-end content producer on the interwebs, I found this episode of No Priors with Cloudflare CEO Matthew Prince to be a welcome shot of optimism in a sea of AI doomerism around content production.

The gist of the episode is that the historical social contract of the web is breaking down. Content producers used to create content, and Google would route them traffic, which could then be monetized through ads, merch, or just plain clout. Increasingly, though, AI models mine that content and provide derivative answers to users without routing any traffic back to the producers, or even attributing them.

Obviously, this new status quo significantly reduces the incentives to provide new content, something which, in the long term, will also impact model performance.

Cloudflare's unique position may enable it to enforce a new status quo: one which provides more granular permissions to content creators around who can view their content and at what price (e.g. humans view for free, but bots have to pay).
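
To make the idea concrete, here is a purely illustrative sketch of what per-requester gating at the edge could look like. The crawler list, prices, and the `gate_request` helper are invented for illustration; this is not Cloudflare's actual mechanism, just one way the "humans free, bots pay" policy could be expressed.

```python
# Hypothetical sketch of per-requester content gating. Policy names, prices,
# and the crawler list below are invented for illustration only.
from dataclasses import dataclass

KNOWN_AI_CRAWLERS = {"GPTBot", "ClaudeBot", "PerplexityBot"}  # illustrative list


@dataclass
class AccessDecision:
    allowed: bool
    status: int        # HTTP status to return
    price_usd: float   # per-request price charged to bots; 0 for humans


def gate_request(user_agent: str, has_paid_token: bool) -> AccessDecision:
    """Humans read for free; known AI crawlers must present proof of payment."""
    is_ai_crawler = any(bot in user_agent for bot in KNOWN_AI_CRAWLERS)
    if not is_ai_crawler:
        return AccessDecision(allowed=True, status=200, price_usd=0.0)
    if has_paid_token:
        return AccessDecision(allowed=True, status=200, price_usd=0.002)
    # 402 Payment Required: the bot is told access must be purchased.
    return AccessDecision(allowed=False, status=402, price_usd=0.002)


if __name__ == "__main__":
    print(gate_request("Mozilla/5.0 (Windows NT 10.0)", has_paid_token=False))
    print(gate_request("GPTBot/1.0", has_paid_token=False))
```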

This identity layer is something the crypto world has spent a lot of time thinking about, with networks like World being the most notable example. But perhaps Cloudflare's existing position at the network's edge lets it enforce a new social contract on today's rails: one which creates a market for truly novel human knowledge creation online.

The old world of Google indexing the internet and routing traffic will die a slow death. We have a chance to rethink online monetization: markets are likely to serve content creators better than regulatory protection, which is unlikely to hold up under fair use.

In an optimistic scenario, this will be a market that stops rewarding dopamine and outrage and starts rewarding truly novel forms of human knowledge that, in turn, fill in the gaps of foundation models.

Echoing Kevin Lu's latest essay, I wonder if Cloudflare is in a position to enforce a globally distributed version of Scale AI / Surge AI. Perhaps the future of the web is internet-scale markets for truly novel human knowledge; something I think we can all get behind.

by Lex
10.08.2025

The release of GPT-5 sparked significant controversy, leading Sam Altman to promise the return of GPT-4o during Friday's Reddit AMA (ask me anything) - a rather embarrassing situation.

While GPT-5's performance and the drastic downgrading of previous models represent the primary controversy, I'm more interested in the less-discussed aspects of OpenAI's most significant release since 2023, particularly how it reflects a shift in the company's strategic focus.

First, on the developer's end, the enhancements are very agent-friendly:

  1. Tools: custom tools now accept any text input (code, SQL, shell, configuration, etc.) beyond just JSON; an allowed-tools mechanism lets you dynamically specify which subset of tools can be used in each round; parallel tool invocation lets the model use multiple tools simultaneously, greatly improving efficiency for complex, multi-step tasks; and a preamble feature automatically generates concise explanations before tool use, improving transparency and debugging.
  2. Structured Output: support for CFGs (context-free grammars) allows developers to use Lark grammars or regex to precisely constrain output formats. Combined with strict mode, JSON output success rates have improved from 40% to over 90%. The model now strictly adheres to parameter formats, making outputs more predictable and safer.
  3. Performance/Latency/Cost Control: new "Reasoning Effort" settings (minimal/low/medium/high) let developers fine-tune how much the model "thinks": minimal for ultra-low-latency needs, high for complex reasoning tasks. Output verbosity settings (low/medium/high) control the level of detail; for code, low generates terse code while high provides detailed explanations with structured formatting (see the sketch after this list).
  4. Context: Chain-of-Thought cross-round delivery enables reasoning chains to persist across conversation rounds, giving developers fine-grained control over context management and multi-step tasks.
  5. Pricing: Token caching support for high-frequency scenarios, dramatically lower API pricing (now aligned with Gemini and approaching 1/10th of Claude's rates).
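
For a concrete feel of item 3, here is a minimal sketch using the OpenAI Python SDK's Responses API. The model name and parameter shapes follow OpenAI's GPT-5 launch documentation as I understand it; treat it as an illustration rather than a definitive reference, since SDK details can drift.

```python
# Minimal sketch of the reasoning-effort and verbosity controls via the
# OpenAI Responses API (Python SDK). Assumes OPENAI_API_KEY is set in the
# environment; parameter values mirror the GPT-5 launch docs.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Write a function that deduplicates a list while preserving order.",
    reasoning={"effort": "minimal"},   # minimal / low / medium / high
    text={"verbosity": "low"},         # low / medium / high
)

print(response.output_text)
```

Dialing effort to "minimal" and verbosity to "low" is the latency-optimized corner of the grid; the same request with high effort and high verbosity trades speed for deeper reasoning and more explanatory output.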

These API enhancements focus on being "agent-friendly" and "developer-controllable." Experienced model-API developers will recognize that while these features aren't directly related to model intelligence, they address significant headaches in day-to-day engineering. When model intelligence itself isn't the bottleneck, these practical features often become the deciding factor in choosing which AI system to use.

Second is the ecosystem. Few may have noticed that Cursor released a CLI mode on the second day after GPT-5's launch, quickly integrating GPT-5 in response to Claude Code's impact. Frequent users recognize that Claude Code's true killer feature is its $200 Max subscription, which provides thousands of dollars' worth of tokens. Now that Anthropic can no longer afford such subsidies and has begun limiting usage, the affordable GPT-5 API paired with the Cursor CLI offers an immediate counterattack just as the competition pauses to regroup. Additionally, the recently released open-weight models feature strategically selected parameter sizes: 120B for high-performance workstations and 20B for consumer PCs and even MacBooks. Their optimization goals and marketing clearly target the recent surge of open-source models from China.

Finally, the most radical change to the consumer experience is ChatGPT's completely revamped user interface. GPT-5 is more than just a model: it's a unified system that combines a fast model, a thinking model, and real-time routing between them. OpenAI has removed all other models from the picker, offering only GPT-5. This simplification actually benefits the "silent majority" of users who were previously confused by multiple model options and likely never understood which one to choose, much like Word users who only know how to copy and paste by right-clicking. The backlash against these changes stems primarily from GPT-5's underwhelming performance (many users believe it's inferior to GPT-4o) rather than from the interface overhaul itself.

The system also features four preset answer personalities, enhanced memory, better third-party integrations (Gmail, calendar), improved accuracy on healthcare questions, significantly reduced hallucinations, and a stronger emphasis on safe outputs. While power users skilled with prompts might find these features trivial, they're genuinely transformative for novices. Additionally, for the first time, the flagship model was made immediately available to free users, and charging U.S. government employees just $1 serves as both a political statement and a strategic move to capture institutional mindshare.

Analyzing the current competitive landscape helps us understand OpenAI's actions. As AI labs have converged on the general LLM + RL formula, capturing users' mindshare has become the key competitive advantage. This is crucial not only for direct revenue but also for the second phase of model training: learning in real environments.

On one hand, OpenAI leads in consumer products with 700 million weekly users (85% international). METR's report shows their user stickiness (DAU/MAU) exceeds 20%, while no close competitor surpasses 10% (excluding China's DeepSeek and MS Copilot, which also uses OpenAI's model). They've doubled annualized revenue to $12B with 90% penetration among Global 500 companies.

On the other hand, Anthropic's relative growth rate is significantly faster: more than 10x for two consecutive years, and quadrupling in the first half of 2025 to reach $4B annualized. An estimated 60-70% comes from API usage (despite having the industry's most expensive API pricing). Claude Code has reached nearly $400 million in annualized revenue within months of launch. Seeing developers gravitate toward Anthropic must concern OpenAI, as historical experience shows that platform-type products win by attracting creators.

Meanwhile, Google remains a formidable competitor, with model technology that potentially surpasses GPT. Their innovations span LLMs, video, world models, and AI4S. Though slower in product development, Google's robust platforms - Search, Chrome, Android, YouTube - provide substantial resources to sustain a prolonged competitive battle. Additional threats include Musk-backed Grok from xAI, a regrouped Meta, and Chinese players competing for the developer ecosystem and enterprise customers with open-weight models.

OpenAI aims to establish itself as both the dominant global consumer AI portal and the preferred assistant, while maintaining its strong position with agent developers and enterprise markets. The strategy is well-executed; ironically, what holds them back this time is their traditional strength: model capability...
