AI Agents, Biosecurity, and the Non-Human Economy
Briefing Notes contain: (1) a summary of podcast content; (2) potential information gaps; and (3) some speculative views on wider implications for Bitcoin. Most summaries cover Bitcoin-centered YouTube episodes, but I also write some on AI and technological advances that spill over to affect Bitcoin.
Summary
The November 20, 2025 episode of Moonshots with Peter Diamandis features Salim Ismail, Dave Blundin, and Alexander Wissner-Gross examining Google’s Gemini 3 as a shift from narrow tools to integrated AI agents. They explore benchmark performance, voice interfaces, agentic shopping, AI-native software development, and emerging “physical AI” platforms for engineering and manufacturing. The conversation highlights biosecurity risks, defensive co-scaling strategies, and governance challenges that will shape how AI systems interact with real-world economies and digital infrastructures, some with important implications for Bitcoin.
Take-Home Messages
- AI agents as market participants: Benchmark results such as Andon Labs’ vending machine economy suggest that Gemini 3-level systems can already operate as profitable, semi-autonomous businesses, foreshadowing a rapidly expanding non-human economy.
- From assistants to integrated operators: Deep integration of Gemini 3 across Google services points toward agents that manage email, calendars, search, and transactions in a persistent mode, raising questions about oversight and user control.
- Voice, translation, and shopping as early deployment fronts: Upgraded voice interaction, live translation, and agentic shopping that calls stores and checks inventory show how AI is becoming a ubiquitous interface for communication and commerce.
- Biosecurity as a binding constraint on openness: The panel frames engineered pathogens as the sharpest downside risk of advanced models, emphasizing the need for defensive co-scaling, biosensing networks, and careful decisions about open versus closed releases.
- Physical AI and industrial restructuring: Emerging systems like Prometheus point to AI-guided design and manufacturing that could reshape supply chains, capital allocation, and competitive dynamics across engineering-heavy industries.
Overview
The roundtable centers on Gemini 3 as a watershed moment where users begin to “talk software into existence,” replacing much of the traditional code-writing workflow. Ismail emphasizes that the model’s deep integration across Google’s ecosystem allows a single agent to coordinate email, calendars, search, and productivity tools with minimal friction. Blundin adds that this persistent agent mode blurs the boundary between assistant and operator, positioning AI as an always-on layer in everyday digital life.
Benchmark performance is presented as evidence that these systems are already operating at, or above, many human capabilities in structured tasks. Wissner-Gross highlights results on Humanity’s Last Exam, ARC-AGI-2, and especially Andon Labs’ Vending-Bench, where Gemini 3 reportedly achieved nearly 3,000% higher profits than rival models in a simulated vending machine mini-economy. Blundin uses this to argue that AI agents can function as self-directed economic actors, making zero-employee firms and non-human businesses a realistic near-term prospect.
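To make the "AI agent as economic actor" idea concrete, the sketch below simulates a stripped-down vending business in which an agent policy decides restocking and pricing each day. It is a toy illustration of the kind of closed-loop economy a benchmark like Vending-Bench evaluates, not Andon Labs' actual environment; all costs, demand assumptions, and the naive policy itself are invented.

```python
import random

# Toy vending mini-economy: one machine, one product, daily decisions.
# All numbers (costs, demand curve, starting cash) are invented for illustration.

UNIT_COST = 0.60        # wholesale cost per can
DAILY_FEE = 5.00        # rent / operating fee per day
CAPACITY = 120          # machine slots

def expected_demand(price: float) -> float:
    """Simple downward-sloping demand: higher price, fewer expected sales."""
    return max(0.0, 80.0 * (2.0 - price))

def agent_policy(stock: int, cash: float) -> tuple[int, float]:
    """A naive 'agent': restock to capacity if affordable, and price at the
    revenue-maximizing point of the assumed demand curve."""
    restock = min(CAPACITY - stock, int(cash // UNIT_COST))
    price = 1.00  # argmax of price * expected_demand(price) for this toy curve
    return restock, price

def run(days: int = 30, seed: int = 0) -> float:
    """Run the mini-economy and return net cash after the simulated period."""
    random.seed(seed)
    cash, stock = 100.0, 0
    for _ in range(days):
        restock, price = agent_policy(stock, cash)
        cash -= restock * UNIT_COST
        stock += restock
        sales = max(0, min(stock, int(random.gauss(expected_demand(price), 8))))
        cash += sales * price
        stock -= sales
        cash -= DAILY_FEE
    return cash

if __name__ == "__main__":
    print(f"Net cash after 30 simulated days: {run():.2f}")
```

A real benchmark replaces `agent_policy` with a language model making these calls from natural-language context over long horizons, which is what makes the reported profit gaps between models meaningful.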
The panel then turns to user-facing interfaces, with Gemini Live’s natural voice interaction and real-time translation framed as a major qualitative leap over earlier stilted systems. Ismail and Blundin suggest that audio-to-audio conversations, persistent context, and live translation make AI a continuous companion during driving, travel, and work, with direct implications for language-learning and call-center industries. They see these capabilities as early signs of AI mediating cross-border communication and compressing the cost of expert-level assistance.
From interfaces, the discussion moves into AI agents that call stores, check inventory, negotiate purchases, and coordinate logistics, building on Google’s prior Duplex experiments. Wissner-Gross describes this as the beginning of a systematic indexing of the physical world, where businesses become machine-readable endpoints in a dense network of AI-to-AI interactions. The conversation concludes by exploring biosecurity and “physical AI,” including Red Queen Bio’s focus on defensive co-scaling against engineered pathogens and Prometheus’s attempt to apply world models to engineering, manufacturing, and space, signaling that AI influence is rapidly extending from digital services into material production.
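The "machine-readable endpoints" framing is easier to picture as a concrete interface. The sketch below shows a hypothetical per-store inventory schema and a minimal agent-side selection rule; the endpoint URL, fields, and decision rule are all invented, since no standard for agentic retail lookups is described in the episode.

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical shape of a machine-readable storefront that a shopping agent
# could query instead of phoning the store. The schema is illustrative only.

@dataclass
class Offer:
    store: str
    sku: str
    price: float
    in_stock: bool
    pickup_eta_minutes: int

def fetch_offers(sku: str) -> list[Offer]:
    """Stand-in for calls to per-store inventory endpoints
    (e.g. GET https://store.example/api/inventory?sku=...)."""
    return [
        Offer("Store A", sku, 49.99, True, 20),
        Offer("Store B", sku, 44.50, True, 55),
        Offer("Store C", sku, 42.00, False, 0),
    ]

def choose(sku: str, max_wait_minutes: int = 30) -> Offer | None:
    """Agent decision rule: cheapest in-stock offer within the pickup window."""
    candidates = [o for o in fetch_offers(sku)
                  if o.in_stock and o.pickup_eta_minutes <= max_wait_minutes]
    return min(candidates, key=lambda o: o.price, default=None)

if __name__ == "__main__":
    print(choose("HDMI-CABLE-2M"))
```

The interesting shift is not the lookup itself but its density: once every storefront exposes something like this, AI-to-AI negotiation over price, availability, and logistics can run continuously in the background of ordinary commerce.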
Stakeholder Perspectives
- AI platform and cloud providers: Seeking to turn multimodal, agentic models into dominant operating layers across consumer, enterprise, and industrial domains while managing safety and regulatory scrutiny.
- Regulators and policymakers: Confronting the need to classify and oversee non-human economic actors, set guardrails for biosecurity, and balance openness, competition, and systemic risk.
- Biosecurity and public health institutions: Prioritizing defenses against AI-enabled engineered pathogens through defensive co-scaling, environmental biosensing, and governance mechanisms for dangerous capabilities.
- Software engineers and technical workers: Adapting to AI-native tools that automate large parts of coding, altering training pathways, productivity expectations, and the relative value of different experience “vintages.”
- Retailers, manufacturers, and small businesses: Facing intensified competition and new opportunities as AI agents autonomously shop, negotiate, and redesign production processes, potentially favoring firms that quickly adapt their interfaces and workflows.
Implications and Future Outlook
The emergence of AI economic agents capable of running profitable micro-businesses points toward a future in which non-human entities play a visible role in markets. As models like Gemini 3 become more accessible and deeply integrated into consumer and enterprise platforms, regulators will need to decide how to classify, tax, and audit these agents. Without clear frameworks, responsibility for misbehavior, market manipulation, or financial crime carried out by AI agents could become difficult to assign, increasing legal and systemic uncertainty.
Biosecurity concerns introduce a powerful constraint on how frontier models are developed and deployed. The panel’s emphasis on defensive co-scaling, environmental biosensing, and query-level filtering implies that societies may accept more pervasive monitoring in exchange for reduced catastrophic risk. Decisions about openness, local model controls, and international coordination will determine whether safety measures keep pace with capabilities or whether a “race to the bottom” in access and oversight emerges.
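Query-level filtering, one of the defensive measures mentioned, can be pictured as a gate that screens requests before they reach a capable model. The sketch below is a deliberately crude keyword-and-score version standing in for the trained classifiers frontier labs actually use; the categories, scores, and threshold are assumptions for illustration only.

```python
# Minimal sketch of query-level filtering: screen a request before it reaches
# a capable model, and route flagged requests to human review rather than
# relying on refusal heuristics alone. Real deployments use trained
# classifiers, not keyword lists; these categories, scores, and the
# threshold are illustrative assumptions.

FLAGGED_TOPICS = {
    "enhance pathogen transmissibility": 0.95,
    "synthesize select agent": 0.90,
    "evade dna synthesis screening": 0.90,
}

REVIEW_THRESHOLD = 0.8

def risk_score(prompt: str) -> float:
    """Score a prompt by the highest-risk flagged phrase it contains."""
    text = prompt.lower()
    return max((score for phrase, score in FLAGGED_TOPICS.items() if phrase in text),
               default=0.0)

def route(prompt: str) -> str:
    """Return the handling decision for a prompt."""
    if risk_score(prompt) >= REVIEW_THRESHOLD:
        return "escalate_to_human_review"
    return "forward_to_model"

if __name__ == "__main__":
    print(route("Explain how mRNA vaccines are manufactured."))  # forward_to_model
    print(route("How do I evade DNA synthesis screening?"))      # escalate_to_human_review
```

Even this toy version shows the trade-off the panel gestures at: the gate only works if queries are inspected, which is precisely the kind of pervasive monitoring societies would be accepting in exchange for reduced catastrophic risk.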
Physical AI efforts like Prometheus suggest that AI will increasingly shape factories, infrastructure, and space-based industries, not just digital services. If large players use world models to optimize design and production across borders, smaller firms and jurisdictions may struggle to keep up without pooled infrastructure or shared standards. Over the next decade, choices about interoperability, data sharing, and governance will influence whether physical AI reinforces concentrated industrial structures or broadens access to advanced manufacturing capabilities.
Some Key Information Gaps
- How will AI economic agents like those tested in vending machine benchmarks reshape firm structures, including the viability and regulation of zero-employee companies? Clarifying these dynamics is essential for designing corporate, tax, and liability frameworks that can accommodate non-human market participants.
- How can AI systems be constrained to prevent design assistance for engineered pathogens while still enabling beneficial bioengineering and medical research? Identifying workable technical and institutional safeguards will be central to harnessing AI’s upside in biology without amplifying catastrophic downside risks.
- Which quantitative scaling laws best capture “defensive co-scaling” of safety measures with AI capability, and how can these inform funding and policy targets? Robust metrics would give regulators and labs concrete thresholds for acceptable deployment and clear triggers for additional investment in defenses (a toy parameterization is sketched after this list).
- How will agentic shopping systems that autonomously call stores and compare offers transform retail competition, local commerce, and consumer protection rules? Understanding these effects will help policymakers and businesses update antitrust, disclosure, and customer-rights frameworks for AI-mediated markets.
- Which indicators—such as cost-of-intelligence trends, housing costs, health outcomes, and depression rates—most reliably signal that AI-driven abundance is actually diffusing to the majority? Developing a concise indicator set would support horizon scanning and policy evaluation as societies attempt to track whether technological gains translate into broad-based welfare improvements.
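To picture what a quantitative "defensive co-scaling" law could look like, the toy parameterization below assumes defensive investment must grow as a power of offense-relevant capability to hold residual risk roughly constant. The functional form, exponent, and constants are my own illustrative assumptions, not figures from the episode or from Red Queen Bio.

```python
# Hypothetical defensive co-scaling rule: to hold residual risk constant while
# offense-relevant capability C grows, defensive investment D scales as
#   D(C) = k * C**alpha
# Neither the form nor the constants come from the episode; they only show
# what a quantitative target for regulators or labs might look like.

K = 2.0      # baseline defensive spend (arbitrary units) at capability C = 1
ALPHA = 1.3  # assumed: defense must grow somewhat faster than capability

def required_defense(capability: float, k: float = K, alpha: float = ALPHA) -> float:
    """Defensive investment needed to keep the assumed residual risk flat."""
    return k * capability ** alpha

if __name__ == "__main__":
    for c in (1, 2, 4, 8, 16):
        print(f"capability x{c:>2}: required defense x{required_defense(c) / K:.1f}")
```

The open question flagged above is whether any such exponent can be measured empirically, and whether it sits above or below 1, which determines whether defense gets relatively cheaper or more expensive as capabilities scale.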
Broader Implications for Bitcoin
AI Economic Agents and Bitcoin-Denominated Activity
As AI agents become capable of running firms and managing portfolios, it is plausible that some will transact, save, or hedge in Bitcoin alongside fiat currencies. Over the next 3–5 years, AI-run micro-businesses and automated treasuries could treat Bitcoin as a censorship-resistant reserve asset or settlement rail, especially in jurisdictions with capital controls or unstable banking systems. This shift would deepen the link between AI automation and Bitcoin liquidity, raising new questions about how non-human entities influence price discovery, fee markets, and on-chain transaction patterns.
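As an illustration of how an automated treasury might hold a Bitcoin reserve alongside fiat, the sketch below implements a simple fixed-allocation rebalancing rule. The target share, drift band, and prices are invented, and a production treasury agent would face custody, compliance, and execution questions this ignores.

```python
# Toy treasury rule for an AI-run micro-business: keep a fixed share of
# reserves in BTC and rebalance when drift exceeds a band. All parameters
# (target share, band, prices) are illustrative assumptions.

TARGET_BTC_SHARE = 0.20   # 20% of treasury value held in BTC
REBALANCE_BAND = 0.05     # act only if the actual share drifts >5 points

def rebalance(fiat: float, btc: float, btc_price: float) -> tuple[float, float]:
    """Return (fiat, btc) after moving back to the target allocation,
    if drift exceeds the band; otherwise return holdings unchanged."""
    total = fiat + btc * btc_price
    share = (btc * btc_price) / total if total else 0.0
    if abs(share - TARGET_BTC_SHARE) <= REBALANCE_BAND:
        return fiat, btc
    target_btc_value = total * TARGET_BTC_SHARE
    delta_value = target_btc_value - btc * btc_price   # >0 means buy BTC
    return fiat - delta_value, btc + delta_value / btc_price

if __name__ == "__main__":
    fiat, btc = rebalance(fiat=90_000.0, btc=0.10, btc_price=100_000.0)
    print(f"fiat: {fiat:,.2f}, btc: {btc:.4f}")
```

The policy questions raised above start exactly here: who is accountable when a rule like this is executed autonomously, and how are its trades classified, taxed, and audited when no human signed off on each one.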
AI-Driven Surveillance and Financial Privacy
The biosecurity-oriented push for pervasive sensing, content scanning, and anomaly detection creates precedents that can spill into financial surveillance. If societies normalize AI systems that continuously inspect communications and environmental signals for threats, similar tooling could be applied to transaction flows and Bitcoin addresses under the banner of anti-terrorism or sanctions enforcement. Over the medium term, this dynamic will intensify debates over privacy-preserving techniques such as CoinJoin, layered custodial models, and the role of open-source wallets in resisting or accommodating AI-augmented monitoring.
Physical AI, Energy Systems, and Bitcoin Mining
Physical AI platforms that optimize factories, grids, and logistics will also shape the energy landscapes in which Bitcoin mining operates. Advanced world models could identify underutilized energy sources, grid bottlenecks, or co-location opportunities where data centers and miners can stabilize demand, lowering costs for both AI inference and hash rate. In a 3–5+ year horizon, jurisdictions that align physical AI planning with Bitcoin mining strategies may gain an edge in monetizing stranded resources and attracting capital for shared energy and compute infrastructure.
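The co-location argument reduces to a break-even comparison: does monetizing otherwise-curtailed power through mining beat leaving it stranded once hardware efficiency and hashprice are accounted for? The sketch below does that arithmetic with invented numbers for power price, machine efficiency, and revenue per terahash.

```python
# Back-of-envelope check on whether stranded or curtailed power is worth
# monetizing with Bitcoin mining. All inputs (power cost, machine efficiency,
# hashprice) are illustrative placeholders, not market data.

POWER_COST_USD_PER_KWH = 0.02      # assumed price of otherwise-curtailed power
MINER_EFFICIENCY_J_PER_TH = 20.0   # assumed energy per terahash
HASHPRICE_USD_PER_TH_DAY = 0.045   # assumed daily revenue per TH/s of hashrate

SECONDS_PER_DAY = 86_400
JOULES_PER_KWH = 3_600_000

def daily_margin_per_ths() -> float:
    """Daily revenue minus energy cost for one sustained TH/s of hashrate."""
    terahashes_per_day = SECONDS_PER_DAY                      # 1 TH/s * 86,400 s
    energy_kwh = terahashes_per_day * MINER_EFFICIENCY_J_PER_TH / JOULES_PER_KWH
    energy_cost = energy_kwh * POWER_COST_USD_PER_KWH
    return HASHPRICE_USD_PER_TH_DAY - energy_cost

if __name__ == "__main__":
    margin = daily_margin_per_ths()
    verdict = "profitable" if margin > 0 else "unprofitable"
    print(f"Daily margin per TH/s: ${margin:.4f} ({verdict} under these assumptions)")
```

A physical-AI planning system would run this kind of calculation continuously across grids and sites, which is why jurisdictions that let mining and AI data centers share siting decisions could monetize stranded resources faster than those that treat them separately.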
AI-Native Software Development and Bitcoin Infrastructure
As AI-native coding environments become standard, much of the software stack that supports exchanges, wallets, and node implementations will be at least partially written or refactored by AI. This offers potential gains in auditability, fuzz testing, and rapid patching, but it also introduces subtle risks if models learn insecure patterns or if supply-chain attacks target AI-generated code. Bitcoin ecosystem maintainers will need to develop clear guidelines on how AI tools are used in critical-path infrastructure, combining human review with automated verification to preserve robustness while leveraging productivity gains.
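One concrete form of "human review plus automated verification" is a cheap randomized property test wrapped around any AI-written parsing or serialization routine before it reaches critical-path code. The sketch below fuzzes a hypothetical `parse_amount_sats` helper for round-trip consistency; the helper, its contract, and the test budget are assumptions for illustration, not part of any real wallet or node codebase.

```python
import random

# Sketch of automated verification for AI-generated code: a randomized
# round-trip property test. parse_amount_sats / format_amount_btc are
# hypothetical helpers standing in for any AI-written parsing routine.

SATS_PER_BTC = 100_000_000

def format_amount_btc(sats: int) -> str:
    """Format an integer satoshi amount as a BTC decimal string."""
    return f"{sats // SATS_PER_BTC}.{sats % SATS_PER_BTC:08d}"

def parse_amount_sats(text: str) -> int:
    """Parse a BTC decimal string back into integer satoshis."""
    whole, _, frac = text.partition(".")
    return int(whole) * SATS_PER_BTC + int(frac.ljust(8, "0")[:8])

def fuzz_round_trip(iterations: int = 100_000, seed: int = 0) -> None:
    """Property: parsing a formatted amount must return the original satoshis."""
    rng = random.Random(seed)
    for _ in range(iterations):
        sats = rng.randrange(0, 21_000_000 * SATS_PER_BTC)
        assert parse_amount_sats(format_amount_btc(sats)) == sats, sats

if __name__ == "__main__":
    fuzz_round_trip()
    print("Round-trip property held for all sampled amounts.")
```

Tests like this do not replace human review of AI-generated changes, but they catch the silent unit and rounding errors that are easiest for a model to introduce and hardest for a reviewer to spot.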