Visualize Ocean Intelligence
A speculative history of the AI-enabled future of marine Carbon Dioxide Removal
The year is 2038.
You’re looking at a screen showing a patch of ocean 170 kilometers northeast of the Seychelles.
It’s the subtropical Indian Ocean gyre, some of the most nutrient-poor waters anywhere. Satellite data shows chlorophyll concentrations barely above detection limits — nothing’s growing there.
Your job is to operate Ocean Intelligence, OI to friends: a prescriptive ocean AI trained on reams of ocean data. The system has just identified a nearby patch with low iron, low fixed nitrogen, and fertilization assets close enough to be deployed in time. It flags the patch as the optimal deployment opportunity for carbon drawdown.
OI pulls together satellite observations and historical measurements from research cruises that have passed through this region to generate a set of probabilistic projections about what happens if you intervene. It runs a full physics-based simulation of the next two weeks of ocean circulation in this exact patch, generating an ensemble of scenarios, each with a likelihood estimate. A mesoscale eddy is likely to develop, which could contain any bloom to a manageable area. Iron added now will hit a sweet spot: enough mixing to distribute nutrients, not so much that the bloom gets dispersed before carbon export begins.
The OI packages all of this into a recommendation. It specifies the intervention: three autonomous surface vehicles should deploy 12 tons of iron across 400 square kilometers in a specific geometric pattern.
It lays out the monitoring strategy: two underwater gliders positioned to track bloom development, a fixed mooring to measure CO2 drawdown, satellite overpasses coordinated with profiling floats to measure particle flux. It estimates 200,000 tons of CO2 sequestered with an uncertainty range of 150,000-280,000 tons. It flags potential risks: a 15% chance the eddy doesn’t develop as predicted, a 5% chance of unusual oxygen depletion based on similar historical cases.
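Numbers like these fall naturally out of an ensemble: each simulated scenario yields one sequestration outcome, and the estimate, its uncertainty range, and the risk figures are just percentiles and event frequencies across the members. A minimal sketch of the idea; all quantities and the scenario model are illustrative assumptions, not any real system:

```python
import random
import statistics

random.seed(42)

def simulate_scenario():
    """One hypothetical ensemble member: does the eddy develop,
    and how much carbon gets exported? Purely illustrative numbers."""
    eddy_develops = random.random() < 0.85          # assume ~15% failure rate
    export = random.gauss(210_000, 35_000)          # tons CO2, assumed spread
    if not eddy_develops:
        export *= 0.5                               # a dispersed bloom exports less
    return eddy_develops, max(export, 0.0)

ensemble = [simulate_scenario() for _ in range(10_000)]
exports = sorted(e for _, e in ensemble)

central = statistics.median(exports)
lo = exports[int(0.10 * len(exports))]   # 10th percentile
hi = exports[int(0.90 * len(exports))]   # 90th percentile
eddy_risk = sum(1 for developed, _ in ensemble if not developed) / len(ensemble)

print(f"Estimate: {central:,.0f} t CO2 (range {lo:,.0f}-{hi:,.0f})")
print(f"Chance eddy fails to develop: {eddy_risk:.0%}")
```

The point is that every headline number in the recommendation is a summary statistic over the same ensemble, so the estimate, the range, and the risk flags stay mutually consistent.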
But that’s just the oceanography. Next it has to deal with the bureaucracy.
This patch of ocean sits in the Seychelles’ Exclusive Economic Zone. You can’t just put iron in someone else’s waters because some computer says it’s a good idea. The OI generates an Environmental Impact Assessment formatted to the regulatory standards required by the Seychelles. It documents baseline conditions, projects ecological changes, quantifies likely impacts on fisheries and marine ecosystems. It shows that iron concentrations will remain below natural variability from dust deposition, that the intervention is time-limited and reversible, that monitoring will detect any adverse effects early. It compiles the whole package (data, methodology, risk assessment, monitoring protocols) and submits it to the Ministry of Fisheries. Then it waits for regulatory approval, just like any other marine research project.
The OI also thinks economically. It pulls current carbon credit prices from compliance and voluntary markets, calculates the costs of deployment and monitoring, and builds a profit-and-loss model for this specific intervention. It estimates return on investment, but crucially, it doesn’t show just a single number. It shows the full distribution of possible outcomes (best case, worst case, most likely case), all couched in terms of quantified uncertainty.
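A distribution of outcomes rather than a point estimate can be produced the same way as the oceanographic projections: sample plausible sequestration totals and credit prices, compute profit for each draw, and report percentiles. A sketch under invented cost and price assumptions:

```python
import random
import statistics

random.seed(0)

CREDIT_PRICE = 85.0        # $/ton, illustrative spot price
DEPLOY_COST = 6_000_000    # vessels, iron, logistics (assumed)
MONITOR_COST = 2_500_000   # gliders, floats, satellite time (assumed)

def one_outcome():
    """Draw one plausible financial outcome for the intervention."""
    tons = max(random.gauss(210_000, 35_000), 0.0)     # sequestered CO2
    price = CREDIT_PRICE * random.uniform(0.8, 1.2)    # market movement
    revenue = tons * price
    return revenue - (DEPLOY_COST + MONITOR_COST)

profits = sorted(one_outcome() for _ in range(10_000))
print(f"Worst case (5th pct):  ${profits[500]:,.0f}")
print(f"Most likely (median):  ${statistics.median(profits):,.0f}")
print(f"Best case (95th pct):  ${profits[9_500]:,.0f}")
```

Because the financial draws reuse the sequestration distribution, the P&L uncertainty inherits the oceanographic uncertainty rather than being estimated separately.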
Then it waits.
A team of oceanographers reviews the recommendation. They work for a non-profit, run by scientists, that operates the OI platform. They drill down into the physics model outputs, check the uncertainty bounds, examine the historical analogs the model used. Of course, they have questions: Why this specific iron dosage? What happens if the eddy arrives six hours early?
The Ocean Intelligence model is not itself a full physics-based ocean model, but it can always invoke one in high-stakes situations. In this case, the OI runs several high-resolution physics-based simulations to answer the human team’s questions and presents its conclusions. After human review and regulatory approval, the oceanographers authorize the deployment.
The three autonomous surface vehicles converge on the target coordinates and begin their runs; monitoring starts immediately. The gliders descend and surface, transmitting data. The mooring records CO2 changes. The OI checks everything against satellite observations of chlorophyll and temperature, processing the stream in near-real-time and continuously updating its predictions against observations.
Four days in, chlorophyll concentrations begin to rise. The bloom is developing exactly as predicted. But on day six, something unexpected happens: dissolved oxygen levels start dropping faster than the model had anticipated. The OI flags it immediately, runs diagnostics, determines it’s likely due to more intense bacterial respiration than the initial model suggested. It checks for harmful algal bloom species, for unusual changes in zooplankton communities, for any sign this is heading somewhere problematic.
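Flagging that kind of departure is, at its core, a residual test: how many forecast standard deviations separate what was observed from what was predicted? A minimal sketch, with hypothetical oxygen numbers chosen only for illustration:

```python
def flag_anomaly(observed, predicted, sigma, threshold=2.0):
    """Flag when an observation departs from the forecast by more than
    `threshold` forecast standard deviations. Illustrative rule only."""
    z = (observed - predicted) / sigma
    return abs(z) > threshold, z

# Hypothetical day-6 oxygen drawdown (umol/kg below baseline):
# forecast 10.0 +/- 0.9, observed ~20% more intense.
flagged, z = flag_anomaly(observed=12.0, predicted=10.0, sigma=0.9)
print(flagged, round(z, 2))  # True 2.22 -- outside the 2-sigma envelope
```

In practice the threshold and the forecast spread would come from the ensemble itself, so what counts as "anomalous" tightens or loosens with the model’s own confidence.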
The OI sends an alert to the oversight team, copying the regulators in the Seychelles: “Oxygen anomaly detected, likely within normal bloom dynamics but 20% more intense than predicted. Recommend continued monitoring, no intervention suspension required.”
It runs the full physics-based model a dozen times to ensure its expectations are met. It shows its work—the results of the full physics-based model runs, its diagnostics, and comparison to historical blooms. The oceanographers and regulators concur. The operation continues. The OI adjusts itself in light of what it has learned from this experience.
By day fourteen, the bloom begins to dissipate. The profiling floats show particulate organic carbon sinking past 200 meters, past 500 meters. The OI then calculates final results: 220,000 tons of CO2 sequestered for at least one century, verified by multiple independent field measurements, with a full counterfactual baseline, and backed also by several runs of the full physics model.
The OI packages everything into a verification report—raw data, model outputs, uncertainty quantification, counterfactual analysis—and hands it off to an independent verification team. They have their own physics-grounded AI, trained independently and using different methods to cross-check the results. The verification AI reviews the monitoring data, runs its own physics models, and confirms the carbon accounting. If the numbers match within acceptable uncertainty bounds, carbon credits get issued.
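“Match within acceptable uncertainty bounds” can be made precise with a standard consistency test: two independent estimates agree if their difference is small relative to their combined uncertainties. A sketch, with invented figures:

```python
import math

def estimates_agree(x1, s1, x2, s2, k=2.0):
    """Two independent estimates agree if their difference lies within
    k combined standard uncertainties (a common consistency check)."""
    return abs(x1 - x2) <= k * math.sqrt(s1**2 + s2**2)

# Operator's estimate vs. the independent verifier's (illustrative tons CO2)
operator = (220_000, 30_000)
verifier = (205_000, 25_000)
print(estimates_agree(*operator, *verifier))  # True: credits can be issued
```

The design choice matters: because the verifier’s AI is trained independently with different methods, agreement here is evidence about the ocean, not just about a shared model bias.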
The credits hit the market; buyers pay up and apply them as offsets against their own nationally mandated caps. Revenue flows back to fund the next deployment, with royalties also flowing to the treasury of the Seychelles. It’s their ocean, after all.
At each step, the OI learns from the experience: bacterial respiration parameters for this region need adjustment. The next recommendation will be more accurate.
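“Parameters need adjustment” has a simple mathematical core: blend the prior regional estimate with what this bloom revealed, weighting each by how certain it is. A conjugate Gaussian update is the textbook version; the rate values below are hypothetical:

```python
def bayes_update(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate Gaussian update: blend a prior belief about a rate
    parameter with a new field estimate, weighted by precision."""
    w = prior_var / (prior_var + obs_var)
    mean = prior_mean + w * (obs_mean - prior_mean)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# Hypothetical bacterial respiration rate (umol O2 / kg / day):
# regional climatology vs. the rate inferred from this bloom.
prior = (0.50, 0.04)
field = (0.62, 0.02)
posterior = bayes_update(*prior, *field)
print(posterior)  # mean shifts toward the field value; variance shrinks
```

The posterior becomes the prior for the next deployment in this region, which is exactly the sense in which “the next recommendation will be more accurate.”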
That’s the vision, anyway. Whether it becomes reality depends on choices we make now.
To be clear, all the tech to build what I’ve just described exists today; it just hasn’t been integrated into a single tool. Using 2026 technology, building such a tool would be complex but doable, if properly resourced. And AI technology is advancing enormously fast. Every year the barriers to building a tool like this will become less daunting. As the frontier pushes towards artificial general intelligence, tools like this that read as futuristic today may come to seem almost humdrum.
I think when we look back, 25 years from now, we’ll find it hard to believe that anyone ever thought it would be possible to get on top of climate change without an AI-based prescriptive ocean model like this.


I’m curious about the worst-case scenario: how bad could it be?
And you mentioned something about it being reversible; how easy is that?
The AIs that are advancing most rapidly these days are LLMs.
Our ability to simulate physics and climate models is quite a different technology.
I'm skeptical we'll be at the stage to model such ambitious projects by 2038.
Of course, more data from trials will help enormously, so doing smaller-scale modelling exercises that can be verified against real outcomes is critical.