19: Personal Superintelligence
Hey, it’s Marc.
OpenAI is now pulling in $12B a year, double what it made just seven months ago. With 700M weekly users and major investors lining up, OpenAI is aggressively building for enterprises and consumers. Its projected 2025 cash burn is now $8B, up from $7B, and it’s raising another $40B, with SoftBank’s total exposure alone at $32B.
Meanwhile, Meta is building a personal superintelligence that augments individual agency, not a godlike AGI. Zuckerberg: “Developing superintelligence — AI that surpasses human intelligence in every way — is now in sight.”
Top Agent signals this week:
OpenAI launches Stargate Norway, its first AI data center initiative in Europe
Alibaba previewed its first AI-powered smart glasses, Quark AI Glasses
Similarweb launches GenAI Intelligence Toolkit
Luma AI introduces “Modify with Instructions”, which lets you edit video with simple text prompts
OpenAI introduces study mode, an interactive learning AI tool
👉 Get your brand in front of 30,000+ decision-makers — book your ad spot now.
Top Boardroom Reads This Week
Superintelligence (Meta)
2025 AI Index Report (Stanford)
AlphaEarth Foundations helps map our planet in unprecedented detail (Google)
The AI Bubble Is Hiding the Real Revolution (The Great Restructuring)
How to price your AI agent: A framework for smart monetisation (Orb)
AI Reporting: Automated Analytics for 2025 (Improvado)
Power Moves
Google Launches Opal
Google Labs released Opal, a no-code AI mini-app builder that lets users chain prompts, models, and tools into functional apps, using natural language and visual editing. It’s now in public beta (US only). [ANNOUNCEMENT]
So what? Google is positioning Opal as a platform for AI-native software creation—competing with Replit’s AI IDEs, OpenAI’s GPTs, and Meta’s open agents. Think Zapier for LLMs, integrated with Google. Users can build and share tools via drag-and-drop or chat, streamlining model calls and workflows in minutes. It's powering a new wave of creators: prompt engineers, knowledge workers, and solo builders launching internal tools and niche SaaS MVPs.
Manus launches Parallel Agent research engine
Manus just launched “Wide Research,” scaling agent-based tasks 100x via cloud VMs. After redefining AI agents with a personal cloud model, Manus now lets Pro users orchestrate hundreds of sub-agents in parallel, each running in its own full VM. The result: deep dives on Fortune 500s, MBA programs, or competitive intel in minutes rather than days. [ANNOUNCEMENT]
So what? Manus is offering distributed cognition-as-a-service, meaning enterprises can now offload high-volume research, analysis, and decision prep to parallelised agents. It’s the start of AI-native workflows at operational scale.
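The fan-out pattern behind this kind of parallel research can be sketched in a few lines. This is a generic illustration, not Manus’s actual API: `run_subagent` is a hypothetical stand-in for dispatching one task to an isolated cloud VM, and the concurrency cap mimics a per-plan VM limit.

```python
import asyncio

async def run_subagent(task: str) -> str:
    # Hypothetical stand-in for dispatching one research task to an
    # isolated VM; here we just simulate work and return a summary.
    await asyncio.sleep(0.01)
    return f"summary of {task!r}"

async def wide_research(tasks: list[str], limit: int = 100) -> list[str]:
    # Fan out one sub-agent per task, capped by a semaphore so we
    # never exceed `limit` concurrent sub-agents at once.
    sem = asyncio.Semaphore(limit)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_subagent(task)

    # gather() preserves input order, so results line up with tasks.
    return await asyncio.gather(*(bounded(t) for t in tasks))

if __name__ == "__main__":
    companies = [f"company-{i}" for i in range(500)]
    results = asyncio.run(wide_research(companies))
    print(len(results))  # one summary per company
```

The interesting design choice is the semaphore: the caller sees a single call over 500 tasks, while the orchestrator quietly throttles how many VMs are live at any moment.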
Google Earth AI builds a virtual planet
Google launched Earth AI and AlphaEarth Foundations — geospatial AI models that map the planet in near real-time using satellite, radar, and climate data. With hyper-detailed 10x10m resolution and 64-dimensional embeddings stored 16x more efficiently, the system processes 1.4T data points annually. Already in use by the UN and Stanford, it powers climate risk, deforestation, and agriculture analysis at an unprecedented scale. [BLOG 1] [BLOG 2]
So what? This is the beginning of a planetary-scale intelligence infrastructure. For enterprises in insurance, logistics, infra, energy, or agtech, Earth AI is a foundation layer, not just a data set.
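Why embeddings rather than raw imagery? Because a 64-dimensional vector per tile makes similarity search cheap: find every area that “looks like” a known deforestation site by comparing vectors, not pixels. A minimal, hypothetical sketch of that idea follows; the tile IDs and vectors are made up, and AlphaEarth’s real embeddings are accessed through Google’s own tooling, not this code.

```python
import math
import random

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(query: list[float], tiles: dict[str, list[float]]) -> str:
    # Return the tile ID whose embedding is closest to the query.
    return max(tiles, key=lambda tid: cosine(query, tiles[tid]))

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical 64-dim embeddings for three 10x10m map tiles.
    tiles = {tid: [random.random() for _ in range(64)]
             for tid in ("tile-a", "tile-b", "tile-c")}
    query = tiles["tile-b"][:]  # embedding of a known site of interest
    print(most_similar(query, tiles))  # finds tile-b, the exact match
```

At planetary scale the brute-force `max` would be replaced by an approximate nearest-neighbour index, but the economics are the point: 64 floats per 100 m² is what makes “search the whole planet” tractable.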
Vendor Spotlight: Runway
Runway just dropped Aleph—a game-changing tool that lets anyone edit, transform, or generate video with a simple prompt. [Read more]
Think of it like ChatGPT, but instead of writing text, it’s rewriting reality… on video.
How can it be used?
Edit anything: Want to remove clutter from a shot? Swap a Prius for a horse-drawn chariot? Say it, and Aleph does it—frame by frame, shadow and all.
Get new angles, instantly: Didn’t shoot a reverse shot? Want a bird’s-eye view? Aleph generates it from your existing footage.
Total control over style + setting: Change the time of day, apply a whole new aesthetic, make it rain (literally). No re-shoots. No VFX team. Only AI.
From raw idea → polished shot: Add a character. Change their clothes. Make them older. Younger. Animated. Or cinematic. This is Photoshop for time.
Aleph is a time machine for video production. It collapses what used to take crews, cameras, and budgets into a single, smart interface. And it’s not just for creators on TikTok. Studios like Lionsgate are already building custom models on top of Runway to speed up pre- and post-production.
What this unlocks:
Creators can make studio-grade content with indie budgets
Brands can A/B test visuals in hours, not weeks
Agencies can repurpose footage endlessly
Filmmakers can visualise scenes on the fly
For marketers and brand managers, Aleph can save the time and expense of a shoot: instead of filming, turn the idea directly into a video.
Competitor: Luma AI
👉 We’re tracking 1000s of AI and blockchain vendors so you can find the right vendors and partners, and bet on the right tech. Sign up for early access.
That’s all for today.
Thanks,
Marc & Team




