Public version. Replication-safe by design.
Note: The figures in this document are modeled scenario outputs under stated growth assumptions. Actual results depend on execution, market conditions, and deployment timeline. Certain architectural details, dispatch parameters, and fabrication specifications are withheld from this public version by design. Those belong in an internal annex, not a public disclosure.

1. SUMMARY: WHY WE BUILD THE COMPUTE VERTICAL

We build the Compute vertical for the same reason we built the hydro Core, the steel vertical, the concrete vertical, and the wafer vertical: because compute has become a utility, and like every other utility, it is being captured. Compute is now required for national planning, AI training, climate and water modelling, industrial process control, drug discovery, logistics optimization, and civil administration. It is not a luxury-tier product anymore. It is the decision substrate of modern states and modern industry. And yet the world has allowed the compute substrate to consolidate into a small handful of gatekeepers, running on constrained grids, priced through opaque stacks, and governed by incentives that do not align with sovereignty, truthfulness, or public welfare.

At the same time, there is a physical hard stop approaching. The world is racing to build AI-era data centers, and the limiting reagent is no longer capital, land, or even chips. The limiting reagent is power delivery and heat rejection. That is now broadly recognized as the binding constraint on data center expansion. The International Energy Agency projects that global electricity demand from data centres more than doubles by 2030, to roughly 945 TWh, with AI as a major driver. In the United States alone, a Lawrence Berkeley National Laboratory analysis projects data center demand rising from 176 TWh in 2023 to between 325 and 580 TWh by 2028. Goldman Sachs estimates data centre power demand could rise 165% by 2030. So the world is building data centers at a pace that grids cannot support, and doing it with cooling architectures that waste massive energy as heat just to keep silicon alive.

CTMP does not "enter the data center market." CTMP collapses the bottlenecks that make the data center market extractive in the first place. The Compute vertical is an industrial conversion plant. It converts ultra-low-cost baseload electrons and a stable ocean thermal sink into continuous, high-utilization computation, with captured heat exported as a usable by-product stream. It does not depend on grid electricity. It does not depend on external chip vendors. It does not depend on third-party cloud platforms whose incentive is to maximize extraction from every cycle.

Because CTMP owns the electrons, the cooling, the chips, the steel frames, the concrete foundations, the immersion tanks, and the AI training environment, most core layers of the traditional data center cost stack collapse. Energy, which represents 20 to 30 percent of legacy data center operating cost, drops to roughly 0.5 percent of operating cost (and just 0.13 percent of revenue). Cooling, which consumes 30 to 50 percent of conventional facility power, drops to less than 5 percent through two-phase immersion. Construction cost per megawatt drops by 50 to 80 percent due to extreme density and internal materials supply.
The result is a platform that can sell sovereign, surveillance-free, auditable computation at 20 percent or more below the cheapest hyperscaler rates while maintaining healthy margins, funding 10 percent annual capacity expansion, and generating tens of billions in annual revenue from Year 1. This is not a speculative claim. It is arithmetic, and this article walks through every step.

2. THE CRISIS IS PHYSICAL: POWER DELIVERY AND HEAT REMOVAL

Most commentary about data centers stays in finance language. Leasing rates, vacancy, cap rates, builder pipelines, hyperscale pre-lets. That is not the core issue. The core issue is physics. You cannot scale compute without scaling power delivery. You cannot scale power delivery without scaling heat rejection. If you reject heat into air with legacy architectures, your cooling overhead rises and your density falls. When density falls, your footprint and capital rise. When power is constrained, construction timelines blow out.

This is already happening across multiple jurisdictions and is widely recognized as the binding constraint. Primary market vacancy rates in North America fell to a record 1.6% in H1 2025, not because demand softened but because there is no power to feed new supply. Development timelines now stretch 2 to 3 years minimum for power delivery alone, with some markets facing 9 to 10 year waitlists just for grid interconnection. The sector is projected to add approximately 100 GW of new capacity between 2026 and 2030, but nearly all of it is bottlenecked by power availability.

At the same time, sovereign demand is rising. "Sovereign cloud" and AI sovereignty are accelerating globally as governments and regulated sectors push workloads toward local control and away from foreign jurisdiction exposure. This is not ideology. It is risk management. When your national health system, grid operator, or defence logistics run on compute infrastructure owned by a foreign corporation in a foreign jurisdiction with foreign surveillance mandates, you have outsourced your decision substrate to a third party.

So the world has two simultaneous problems. Not enough power to build the data centers it wants. And not enough trust to put sovereign workloads on someone else's hardware, in someone else's jurisdiction, with someone else's incentives. CTMP resolves both because CTMP is built to own the physical substrate.

3. THE CTMP ANSWER: COMPUTE AS A CLOSED-LOOP INDUSTRIAL PRODUCT

CTMP treats compute the way it treats steel and concrete: as a produced quantity with an engineered baseline, an auditable bill of materials, and a governed output. This matters because the traditional data center stack is a tower of compounding leakage. Grid electricity purchased at volatile rates. Transmission and congestion overlays. Cooling plant overhead that can consume half the facility power. Air-handling complexity and redundancy layers. Construction cost inflation driven by specialized contractors and multi-year lead times. Vendor lock-in at the silicon layer. Firmware and supply chain opacity at the security layer. Pricing opacity at the service layer. Debt-service overlays at every tier.

CTMP collapses that tower by owning the conversion chain from electron to compute-hour.

Electrons: stable baseload from CTMP ocean-fed hydroelectric generation. No fuel. No carbon. No grid volatility.
Cooling: a stable ocean thermal sink coupled to sealed immersion pods. No air-handling plant. No weather dependency.
Hardware: sovereign supply chain inputs. Green Steel for frames.
Green Concrete for foundations. 3D-printed immersion tanks.
Chips: the Wafer vertical produces 300-millimetre wafers from endemic polysilicon, processed through CTMP-owned fabrication lines, yielding sovereign accelerators with full chain-of-custody from raw mineral to deployed socket.
Governance: auditability and enforcement via the Stewardship Charter and the Sovereign Logic Engine.

So even when compute is sold to external clients and other nations, the pricing logic is not "power at tariff plus markup." Compute is sold as compute. Power is an internal feedstock, transfer-priced at the internal electron baseline. And the internal electron baseline for CTMP is: $0.0008 per kWh (0.08 cents per kWh). That single number changes everything downstream.

4. THE PHYSICAL ARCHITECTURE: OCEAN-COUPLED, PODDED, TWO-PHASE IMMERSION

4.1 Ocean-Cooled Compute Pods

Each compute pod is designed to be thermally coupled to seawater. The system uses a seawater draw that reaches into a stable cold layer and then rejects heat back through controlled exchange. The key point for public disclosure is not the exact depth or pipe geometry. The key point is that the ocean gives you what air cannot: a stable, high-capacity thermal reference that does not care if it is noon in August or midnight in January. That stabilizes the thermal environment and makes high utilization realistic, because performance does not throttle under ambient swings. Where air-cooled facilities must de-rate capacity on hot days and manage seasonal cooling transitions, the ocean-coupled pod runs at the same thermal setpoint year-round.

4.2 Two-Phase Dielectric Immersion

The compute tanks use a two-phase dielectric working fluid. Electronics are fully immersed in a non-conductive medium. Heat at the chip surface drives boiling. Vapour rises, transfers heat by condensing on an engineered surface at the top of the tank, and the cooled fluid returns by gravity to continue the cycle. The boiling action creates automatic convection without pumps.

This matters because it is not "just cooling." It is a fundamental change in the limiting physics of compute density. Air cooling is a compromise. You blow large volumes of air across heat sinks and hope the room stays within thermal spec. Even single-phase liquid cooling is still a compromise: you circulate fluid and rely on sensible heat pickup without phase change. Two-phase immersion is the first point where the cooling architecture stops being the constraint and starts being an enabling layer.

The essential properties of the working medium are: dielectric (does not conduct electricity, allowing full immersion at high density), phase-change at the hot surface (heat removal is driven at the chip boundary layer where it matters, not by dragging air across a room), vapour transport (heat is carried away by vapour, not by high fan power or large chilled-water plants), condensation on engineered surfaces (the top-of-tank condenser concentrates heat rejection), and gravity return (the cooled working fluid returns without pumping energy).

4.3 Footprint and Density: The Arithmetic Is Violent

Four 250 kVA immersion tanks delivering 1 MW of IT load occupy 72 square feet of floor space. The equivalent 1 MW using traditional air-cooled racks occupies 1,115 square feet. That is roughly a 15-fold reduction in physical footprint per megawatt. This is the difference between "data center as a building" and "compute as an industrial module."
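The density arithmetic above can be checked in a few lines. A minimal sketch, using only the figures stated in this section (the 72 and 1,115 square-foot values are the scenario inputs quoted above, not survey data):

```python
# Footprint per megawatt: two-phase immersion vs. air-cooled racks.
# Inputs are the Section 4.3 scenario figures, not measured survey data.

IMMERSION_SQFT_PER_MW = 72      # four 250 kVA tanks delivering 1 MW of IT load
AIR_COOLED_SQFT_PER_MW = 1_115  # equivalent 1 MW in traditional air-cooled racks

reduction = AIR_COOLED_SQFT_PER_MW / IMMERSION_SQFT_PER_MW
print(f"Footprint reduction: {reduction:.1f}x")  # ~15.5x

# At the Year-1 scale of 2 GW (2,000 MW) of IT load:
it_mw = 2_000
print(f"Immersion floor space:  {it_mw * IMMERSION_SQFT_PER_MW:,} sq ft")   # 144,000
print(f"Air-cooled floor space: {it_mw * AIR_COOLED_SQFT_PER_MW:,} sq ft")  # 2,230,000
```

At 2 GW, that is roughly 144,000 square feet of immersion floor space against roughly 2.23 million square feet air-cooled, which is the point of the next paragraph.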
When the footprint collapses, your capital stack changes, your deployment speed changes, and your siting flexibility changes. You stop needing 400,000 square foot warehouse shells with raised floors and dedicated chilled-water plants. You start deploying compute like you deploy industrial equipment: in modular units, at speed, into the spaces that exist.

4.4 Cooling Overhead: Stop Throwing Half Your Power Away

The traditional condition has been framed bluntly: 50% compute, 50% cooling in conventional data centers, versus approximately 95% compute in immersion architectures. Even if any given operator disputes the exact split, the direction is not disputable. Traditional facilities burn a massive fraction of energy just to keep air-cooled rooms stable. Two-phase immersion collapses that overhead from hundreds of megawatts to single-digit megawatts at equivalent IT scale. At 2 GW of IT load, the difference between a conventional PUE of 1.4 and an immersion PUE approaching 1.05 is the difference between 800 MW of parasitic cooling load and 100 MW. That is 700 MW of saved power, which at grid rates of $0.08/kWh represents approximately $490 million per year in avoided cooling energy alone.

4.5 Heat Is Not Waste: Heat Export as a Product Stream

A data center is an electron-to-heat converter that happens to do computation along the way. Legacy operators dump that heat into the atmosphere as a waste product. The immersion model does the opposite: compute produces useful captured heat, with heated water output at approximately 60 degrees Celsius available for district heating systems, mixed-use buildings, light industrial processes, agriculture, desalination pre-heating, and aquaculture, creating a closed-loop system with little to no thermal waste. So the Compute vertical is not a parasitic consumer of electrons. It is a dual-output engine that produces computation on one side and exportable thermal energy on the other. The heat becomes another internal efficiency loop, not a loss term.

5. THE ECONOMICS: INTERNAL ELECTRONS AT $0.0008 PER kWh CHANGE THE WORLD

This is where the arithmetic becomes difficult for incumbents to process, because it invalidates their entire cost structure. The posted Core tariff of $0.025 per kWh (2.5 cents) is an external price for external power sales. It is a commercial instrument for the outside world. The Compute vertical, by design, runs as an intra-platform consumer of electrons. Even when compute is sold externally, the feedstock power is still internal. You sell compute-hours, not electrons. Internal electricity transfer price to Compute: $0.0008 per kWh (0.08 cents per kWh). Now run the Year-1 scenario.

5.1 Scenario: 2 GW of IT Load, Immersion-Cooled

IT load: 2,000 MW (2 GW). This represents less than 0.7% of a single CTMP module's 300 GW baseload capacity. The allocation is governed by the Sovereign Logic Engine and sits below life-safety loads and electrolyser feeds in the dispatch hierarchy, but above sidecar commercial contracts. Facility overhead at immersion PUE of approximately 1.05: total facility power = 2,100 MW. Annual energy consumption: 2.1 GW x 8,760 hours = 18,396,000,000 kWh = 18.396 TWh.

5.2 Energy Cost at CTMP Internal Electrons

18,396,000,000 kWh x $0.0008/kWh = $14,716,800. Annual energy cost: approximately $14.7 million per year. That is not a typo. That is what happens when power is physics-priced instead of market-priced.
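The energy arithmetic in 5.1 and 5.2 reduces to three multiplications, and the same function covers the conventional comparison worked in 5.3 immediately below. A minimal sketch under the stated scenario assumptions:

```python
# Sections 5.1-5.3: annual energy cost, CTMP internal vs. conventional grid.
# All inputs are the stated scenario assumptions, not measured values.

HOURS_PER_YEAR = 8_760

def annual_energy_cost(it_load_gw: float, pue: float, price_per_kwh: float):
    """Return (annual TWh, annual cost in $) for a given IT load and PUE."""
    facility_gw = it_load_gw * pue
    annual_kwh = facility_gw * 1e6 * HOURS_PER_YEAR  # GW -> kW, then kWh/year
    return annual_kwh / 1e9, annual_kwh * price_per_kwh

# CTMP: 2 GW IT load, immersion PUE ~1.05, internal electrons at $0.0008/kWh.
ctmp_twh, ctmp_cost = annual_energy_cost(2.0, 1.05, 0.0008)
# Conventional (Section 5.3): PUE ~1.4, grid power at $0.08/kWh.
conv_twh, conv_cost = annual_energy_cost(2.0, 1.4, 0.08)

print(f"CTMP:         {ctmp_twh:.3f} TWh -> ${ctmp_cost:,.0f}")  # 18.396 TWh, ~$14.7M
print(f"Conventional: {conv_twh:.3f} TWh -> ${conv_cost:,.0f}")  # 24.528 TWh, ~$1.96B
print(f"Annual delta: ${conv_cost - ctmp_cost:,.0f}")            # ~$1.95B
```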
5.3 Compare to Conventional Grid-Powered Hyperscale

A conventional 2 GW IT build runs at PUE of approximately 1.4 using air-cooled or mixed cooling architectures. Facility power: 2.8 GW. Annual energy: 2.8 GW x 8,760 hrs = 24,528,000,000 kWh = 24.528 TWh. At grid electricity of $0.08/kWh (a moderate assumption for primary data center markets): $1,962,240,000. Conventional annual energy cost: approximately $1.96 billion.

CTMP energy cost advantage: $1.96 billion minus $14.7 million = approximately $1.95 billion per year. That is $1.95 billion per year in energy savings alone, before capital, land, construction schedule, or cooling plant complexity even enters the discussion. At grid rates of $0.10/kWh, which is common in constrained markets, the delta exceeds $2.4 billion per year.

This is why the global "AI data center boom" is colliding with grids. Even if capital is available, the power is not, and if the power is available, the price is high and volatile. CTMP flips that. The power is internal and the price floor is engineered into physics.

5.4 Capital Cost: Why Immersion Modularity Matters

Traditional data center construction averages approximately $6 million per MW of IT capacity for purpose-built facilities. JLL reports that construction costs have now climbed to $10.7 million per MW, with AI-optimized fit-out adding up to $25 million per MW in specialized configurations. At 2 GW, that means $12 billion to $20 billion in capital cost for a conventional build. CTMP's immersion-modular approach, using internally produced steel, concrete, and 3D-printed tanks, targets approximately $2 million per MW. At 2 GW, that is approximately $4 billion in capital cost.

The structural reasons are simple. The footprint collapses (15x smaller). Air handling, raised floors, large CRAC plant, and the architecture of "cooling a room" stop being the model. Modular tanks become the unit of deployment, not bespoke halls. Internal material supply eliminates contractor markups and supply chain bottlenecks. Zero debt eliminates finance charges. Together, these shorten timelines from 2 to 3 years down to months, and remove the specialized contractor bottleneck that is inflating build costs across the industry.

6. INTER-PLATFORM ROLE: HOW THE DATA CENTER FEEDS EVERY VERTICAL

The Compute vertical does not exist in isolation. It is the nervous system of the CTMP ecosystem, the layer that converts raw electrons into intelligence, and that intelligence flows back into every other vertical to make the entire platform tighter, faster, safer, and more predictable.

6.1 Green Steel

The Compute vertical provides real-time process optimization for the electric arc furnaces: dynamic lancing control, spectroscopy-driven alloy chemistry adjustment, and predictive maintenance scheduling. AI models trained under Charter-Constrained Learning analyze temperature profiles and melt characteristics in real time, catching quality drift before it reaches the casting floor. The computational overhead of running these models across hundreds of simultaneous melt operations is non-trivial, and it requires a compute substrate that is cheap enough to run continuously without economic hesitation. In return, Green Steel provides the structural frames, server rack supports, cooling manifolds, penstock-grade precision components, and cable tray systems used in data center construction. The steel is produced at $350 to $400 per tonne internal cost versus market rates of $700 to $1,500.
When you are building racks for 2 GW of compute capacity, that material cost advantage compounds across hundreds of thousands of tonnes.

6.2 Green Concrete

Data center facilities sit on Green Concrete foundations, cable vaults, raised floor substrates, and converter platforms. These are carbon-sequestering structures cast from CTMP's own binders, aggregates, and desalinated process water at $56 per cubic metre versus market rates of $100 to $200. The Compute vertical returns the favour by running the mix-optimization algorithms, curing-schedule models, strength-prediction analytics, and logistics dispatch systems that keep 300 batching hubs placing 60 million cubic metres per year on schedule. The concrete vertical's quality control depends on compute. The compute vertical's physical plant depends on concrete. Neither can operate at this scale without the other.

6.3 HVDC Transmission

The Compute vertical is fed by dedicated HVDC arteries delivering power at the internal electron baseline with no grid volatility, no congestion rents, and no third-party wheeling charges. The Sovereign Logic Engine dispatches power to compute pods according to the priority hierarchy. In return, the Compute vertical runs the load-flow optimization, fault prediction, contingency analysis, and dispatch algorithms that keep the HVDC network operating at maximum transfer capability. Every converter station, every cable segment, every protection relay generates telemetry that flows into compute models running on compute infrastructure fed by the same HVDC network. The circularity is structural, not incidental.

6.4 Green Molecules Platform

When compute demand dips below allocation, surplus electrons are automatically redirected to electrolysers for hydrogen production, ammonia synthesis, methanol conversion, or synthetic natural gas generation. The Compute vertical acts as a flexible demand buffer, absorbing power that would otherwise be curtailed and converting it into useful molecular output via the power-to-X pathway. This is governed by the Sovereign Logic Engine and requires no human intervention. The system sees available electrons, checks compute demand, calculates the marginal value of routing those electrons to electrolysers versus holding them for anticipated compute ramp, and dispatches accordingly (a toy sketch of this decision appears after 6.5 below). It is the same logic that grid operators wish they had, except it works because the entire conversion chain is internal and the dispatch priorities are constitutional, not commercial.

6.5 The Wafer Vertical: The Sovereignty Lock

This is the vertical that makes everything else possible. If the Compute vertical is fed by external accelerators that can be embargoed, remotely constrained, or supply-chain substituted, then compute is not sovereign. Period. The Wafer vertical produces 300-millimetre wafers from polysilicon endemic to the deployment region, processed through CTMP-owned fab lines powered by CTMP electrons. These wafers become the compute accelerators, control processors, and sensor chips that populate the immersion pods. Because the entire silicon supply chain is internal, from raw polysilicon to packaged die, there are no hidden kernels, no surveillance backdoors, no vendor lock-in, and no export-control chokepoints. Every chip is cryptographically signed at fabrication and tracked through the Sovereign Logic Engine from wafer to socket. This is not a marketing claim. It is a structural immunity claim. And it is exactly why the client base for this vertical is not a mystery.
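As referenced in 6.4, here is a toy sketch of the surplus-electron routing decision. The function shape, thresholds, and dollar values are illustrative assumptions; the actual SLE dispatch parameters are withheld in the internal annex:

```python
# Toy sketch of the Section 6.4 surplus-electron routing decision.
# Names, values, and the marginal-value comparison are illustrative
# assumptions; actual SLE dispatch parameters are not public.

def route_surplus_mw(available_mw: float,
                     compute_demand_mw: float,
                     hold_value_per_mwh: float,
                     molecule_value_per_mwh: float) -> dict:
    """Split available power between compute pods and electrolysers.

    Compute is served first up to demand; the remainder goes to the
    higher-value sink, mirroring the 'serve, then compare marginal
    value' logic described in the text.
    """
    to_compute = min(available_mw, compute_demand_mw)
    surplus = available_mw - to_compute
    # Surplus flows to electrolysers unless holding it for an
    # anticipated compute ramp is worth more at the margin.
    to_molecules = surplus if molecule_value_per_mwh >= hold_value_per_mwh else 0.0
    return {"compute_mw": to_compute,
            "molecules_mw": to_molecules,
            "held_mw": surplus - to_molecules}

# Example: 2,500 MW available, pods ask for 2,100 MW, and hydrogen is
# the better marginal use of the 400 MW surplus.
print(route_surplus_mw(2_500, 2_100,
                       hold_value_per_mwh=55.0,
                       molecule_value_per_mwh=60.0))
```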
6.6 3D Printing Vertical

The immersion cooling tanks are manufactured using CTMP's 3D printing vertical. This allows rapid iteration of tank geometries optimized for specific thermal profiles, custom condenser configurations, and modular pod housings that can be replicated at scale without depending on external sheet-metal fabricators or specialty welding contractors. When you need to deploy thousands of immersion tanks per year for a 10% compounding expansion mandate, you cannot be waiting on external fabrication lead times. You build your own.

6.7 Desalination

The Desalination vertical supplies deionized water for the secondary cooling loops that carry heat from the immersion condenser surfaces to the ocean thermal reference. Closed-loop water circuits with greater than 90 percent reuse minimize freshwater consumption. Brine by-products provide mineral feedstocks, including magnesium and silica derivatives, used in cable insulation compounds and anti-corrosion treatments for data center infrastructure.

6.8 Turbine Factory

The Turbine Factory provides precision electrical components: transformers, converter valves, high-current buswork, and power conditioning equipment shared with the data center power distribution systems. Common engineering standards and shared QA pipelines mean that power conditioning equipment for compute pods meets the same reliability specifications as hydroelectric generation equipment. This is not a coincidence. It is a design choice.

6.9 The Circularity

Every vertical feeds the Compute vertical, and the Compute vertical feeds every vertical. Green Steel builds the frames. Green Concrete pours the floors. HVDC delivers the electrons. Desalination supplies the water. The Wafer fab produces the chips. 3D Printing makes the tanks. The Turbine Factory builds the power conditioning. And the Compute vertical runs the algorithms that optimize every one of those processes in return. This is not a supply chain. It is a closed industrial loop where computation is both a product and an accelerant. Remove any one vertical and the loop still functions at reduced capacity. Remove compute and the entire platform loses its nervous system.

7. REVENUE MODEL: HOW THE DATA CENTER GENERATES TENS OF BILLIONS

This section presents the revenue calculation step by step, with every assumption stated and every number traceable to either physics or documented market benchmarks.

7.1 Market Context

The global colocation market generated over $104 billion in revenue in 2025, growing at a 14.4% compound annual rate toward $204 billion by 2030 (MarketsandMarkets). Average colocation rates in primary North American markets reached $163 to $184 per kilowatt per month for mid-size deployments, with some dense urban markets exceeding $200/kW/month (CBRE, H1 2025). Globally, average rates reached $217/kW/month.

Cloud compute is priced significantly above raw colocation. The hyperscaler GPU rental market shows on-demand rates of $1.49 to $6.98 per GPU-hour for current-generation accelerators, with AWS charging approximately $3.90/GPU-hour after its 44% price reduction in June 2025, Azure at $6.98/GPU-hour, and specialist providers ranging from $1.49 to $2.99 (multiple sources, November 2025). The GPU-as-a-Service market was $3.34 billion in 2023 and is projected to reach $33.9 billion by 2032. AI inference and training contracts at enterprise scale routinely reach $10 million to $100+ million per year per customer. The total addressable cloud compute market exceeds $500 billion annually.
Electricity alone represents 20 to 30 percent of conventional data center operating costs, meaning energy is the single largest controllable expense in the industry. Power has become the single most important site-selection factor. Development timelines stretch 2 to 3 years minimum for power delivery. Some markets face near-decade waitlists. CTMP does not face these constraints. Power is internal, effectively unlimited relative to allocation, and available from Day 1.

7.2 Revenue Calculation (Digit-by-Digit)

Step 1: Capacity

Year-1 IT power allocation: 2,000,000 kW (2 GW). Less than 0.7% of one CTMP module. Utilization target: 85%. This is realistic, not aspirational. Stable baseload power eliminates the grid volatility that forces conventional operators to maintain higher headroom. Immersion cooling eliminates the thermal throttling that reduces effective utilization in air-cooled halls. Ocean-coupled heat rejection eliminates seasonal de-rating. Effective utilized capacity: 2,000,000 kW x 0.85 = 1,700,000 kW.

Step 2: Pricing

CTMP prices compute services as a full-stack offering: sovereign silicon, immersion cooling, Charter-Constrained Learning AI models, auditable governance, zero-surveillance operation, and exported heat. This is not raw colocation. It is a complete sovereign compute platform with no equivalent in the current market. Industry benchmark for managed cloud compute services (all-in, including compute markup over raw colocation): $500 to $1,500 per kW per month. CTMP posted tariff: $400 per kW per month, representing a minimum 20% discount below the low end of managed cloud. This is roughly double the raw colocation rate of $184/kW/month, which is appropriate because CTMP provides the complete compute stack. A raw colocation customer brings their own servers, their own chips, their own software. A CTMP compute customer gets the entire conversion chain from electron to intelligence.

Step 3: Core Compute Revenue

Annual revenue = Effective capacity x Tariff x 12 months
= 1,700,000 kW x $400/kW/month x 12
= 1,700,000 x $4,800
= $8,160,000,000 (approximately $8.16 billion).

Step 4: Additional Revenue Streams

The $8.16 billion covers managed compute. Additional revenue comes from three sources:

Charter-Constrained Learning AI-as-a-Service (see the recently published CCL article: https://peoplesctmp.substack.com/p/the-next-evolution-of-ai?r=3v4oik). Enterprise customers license access to AI models trained in CTMP's auditable, incentive-compatible environment. There is no market equivalent for AI that is provably free of surveillance hardware, provably trained in environments where truthfulness is the dominant strategy, and provably governed by constitutional constraints. Sovereign governments, regulated sectors, and industrial conglomerates will pay a premium for this, even at CTMP's discounted pricing.

Waste heat export. At 2 GW of IT load, the Compute vertical produces approximately 2 GW of thermal energy at 50 to 60 degrees Celsius. This is usable for district heating, agriculture, light industrial processes, aquaculture, and desalination pre-heating. At conservative valuations of $10 to $20 per MWh-thermal: $175 to $350 million per year. This grows linearly with compute capacity.

Data sovereignty premium services. Governments and regulated industries requiring compute that provably contains no foreign surveillance hardware, no hidden firmware, and no foreign-jurisdiction data exposure. These are not hypothetical customers.
The global sovereign cloud market is growing at double-digit rates specifically because this need exists and no current provider can fully satisfy it.

Step 5: Total Year-1 Revenue

Managed compute: $8.16 billion. CCL AI services: $3.0 billion (midpoint estimate). Heat export and sovereignty premiums: $0.5 billion (conservative). Total Year-1 revenue: approximately $11.7 billion. Conservative range: $10 to $15 billion.

Step 6: Year-1 Cost Structure

Energy: $14.7 million. This is 0.13% of revenue. Read that again. Energy is thirteen hundredths of one percent of revenue. In an industry where energy represents 20 to 30 percent of operating cost for every other operator on Earth, CTMP's energy line item effectively disappears. Silicon (sovereign wafers at fabrication cost, depreciation, and yield loss): approximately $1.5 billion. Labour (operations, maintenance, customer engineering, security, governance): approximately $800 million. Steel, concrete, tank, and infrastructure depreciation: approximately $400 million. DI water, dielectric consumables, spares, and logistics: approximately $200 million. Total Year-1 operating cost: approximately $2.9 billion. Year-1 operating profit: approximately $8.8 billion. Operating margin: approximately 75%.

For context: AWS operating margin runs approximately 30 to 37%. Azure cloud margins are lower. CTMP's structural advantage comes from owning the entire conversion chain. No grid power cost. No land acquisition at hyperscale-inflated rates. No external silicon markup. No debt service. No cooling plant capital. No air-handling energy. That is not a marginal improvement. It is a different cost universe.

8. THE EFFECT OF 10% COMPOUNDING PER ANNUM

The CTMP Stewardship Charter mandates 10% annual reinvestment in capacity expansion across all verticals. For the Compute vertical, this means 10% more compute capacity deployed every year, funded from operating surplus, without exception and without external capital.

Formula: Value(n) = Value(1) x 1.1^(n-1), where Value(1) is the Year-1 baseline and n is the current year. Rounding rule: annual values are computed from the compounding formula and shown rounded; cumulative revenue is computed from unrounded annual values and then rounded. Assumptions for ratio stability: revenue per EFLOP is constant, power per EFLOP is constant, and electricity price is constant. (A reproducible sketch of this projection follows the table.)

Year 1: IT Power: 2.00 GW. Compute: 10.0 EFLOPS. Annual Revenue: $11.7B. Cumulative Revenue: $11.7B. Annual Energy Cost: $14.7M.
Year 2: IT Power: 2.20 GW. Compute: 11.0 EFLOPS. Annual Revenue: $12.9B. Cumulative Revenue: $24.6B. Annual Energy Cost: $16.2M.
Year 3: IT Power: 2.42 GW. Compute: 12.1 EFLOPS. Annual Revenue: $14.2B. Cumulative Revenue: $38.7B. Annual Energy Cost: $17.8M.
Year 5: IT Power: 2.93 GW. Compute: 14.6 EFLOPS. Annual Revenue: $17.1B. Cumulative Revenue: $71.4B. Annual Energy Cost: $21.5M.
Year 7: IT Power: 3.54 GW. Compute: 17.7 EFLOPS. Annual Revenue: $20.7B. Cumulative Revenue: $111.0B. Annual Energy Cost: $26.1M.
Year 10: IT Power: 4.72 GW. Compute: 23.6 EFLOPS. Annual Revenue: $27.6B. Cumulative Revenue: $186.5B. Annual Energy Cost: $34.7M.
Year 15: IT Power: 7.60 GW. Compute: 38.0 EFLOPS. Annual Revenue: $44.4B. Cumulative Revenue: $371.7B. Annual Energy Cost: $55.9M.
Year 20: IT Power: 12.23 GW. Compute: 61.2 EFLOPS. Annual Revenue: $71.6B. Cumulative Revenue: $670.1B. Annual Energy Cost: $89.9M.

By Year 20, the Compute vertical is generating approximately $71.6 billion in annual revenue from a single CTMP module. Modeled 20-year cumulative revenue (Years 1 to 20) is approximately $670.1 billion.
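The projection above can be reproduced directly from the stated formula and rounding rule. A minimal sketch (the Year-1 baselines are the scenario figures from Sections 5 and 7; the 10% rate is the Charter mandate):

```python
# Section 8: reproduce the compounding projection from the stated
# formula Value(n) = Value(1) x 1.1^(n-1) and rounding rule.
# Year-1 baselines are the scenario figures from Sections 5 and 7.

GROWTH = 1.10
BASE = {"it_gw": 2.0, "eflops": 10.0, "revenue_b": 11.7, "energy_m": 14.7}

cumulative_revenue = 0.0
for year in range(1, 21):
    factor = GROWTH ** (year - 1)
    revenue = BASE["revenue_b"] * factor      # kept unrounded for the cumulative sum
    cumulative_revenue += revenue
    if year in (1, 2, 3, 5, 7, 10, 15, 20):   # the rows shown in the table
        print(f"Year {year:>2}: "
              f"{BASE['it_gw'] * factor:5.2f} GW  "
              f"{BASE['eflops'] * factor:5.1f} EFLOPS  "
              f"${revenue:5.1f}B  "
              f"cum ${cumulative_revenue:6.1f}B  "
              f"energy ${BASE['energy_m'] * factor:5.1f}M")

# 20-year cumulative factor: (1.1^20 - 1) / 0.1 = 57.275
print(f"Cumulative factor: {(GROWTH ** 20 - 1) / (GROWTH - 1):.3f}")
```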
Energy cost at Year 20 is still under $90 million, representing about 0.13% of revenue. Under the stated assumptions, that ratio is stable because both revenue and energy cost scale at the same 10% compounding rate. The structural advantage is permanent.

20-year cumulative factor (Years 1 to 20): the sum of 1.1^(n-1) for n = 1 to 20, which equals (1.1^20 - 1) / 0.1 = 57.275. This means 57.275 years' worth of Year-1 output is delivered over the 20-year period, reflecting the power of mandated reinvestment.

Milestone interpretation:
Year 1: CTMP is a nation-scale compute deployment from a single module.
Year 5: Compute capacity has grown 46%. Revenue is over $17 billion.
Year 10: Capacity has grown 2.36-fold. Revenue exceeds $27 billion.
Year 20: Capacity has grown 6.12-fold. Revenue exceeds $71 billion.

9. CHARTER-CONSTRAINED LEARNING: THE AI THAT CHANGES THINGS

The Compute vertical does not just sell compute cycles. It sells a fundamentally different kind of AI, one trained in an environment where truthfulness is the dominant strategy because distortion is structurally costly. This is Charter-Constrained Learning.

9.1 The Problem with Current AI

Every major AI system in production today was trained on data generated by organizations operating under financial, legal, and career incentives that systematically distort reporting. This is not a conspiracy theory. It is incentive economics. Safety margins become negotiated trade variables because reporting a narrow margin triggers regulatory scrutiny while a wide margin triggers capital reallocation. Near-misses are suppressed or reclassified because reporting them has career consequences for the individuals involved. Operational reality is selectively documented to satisfy targets because compensation structures reward meeting targets, not reporting failures.

The AI systems trained on these decisions inherit the same distortions, even when their datasets are demographically balanced. The problem is not in the training algorithm. The problem is in the data-generating environment. Garbage in, garbage out. Except the garbage is not random noise. It is systematically biased toward whatever makes the reporting entity look best, and that systematic bias is precisely what AI is most efficient at learning.

9.2 How CCL Works

CCL reframes unbiased AI as a property of the data-generating environment, not the model alone. It does not try to "fix" biased data after the fact. It engineers the operational environment so that generating biased data is structurally disadvantageous. The mechanism: when the probability of detecting distortion (p) multiplied by the penalty for distortion exceeds the short-term gain from distortion, truthful reporting becomes the rational strategy. CCL engineers the environment so that detection probability is high through auditable telemetry and Sovereign Logic Engine monitoring, penalties are high through automatic, non-discretionary consequences, and the intrinsic cost of distortion is visible and attributable in real time. This is not a hope. It is mechanism design applied to data generation. The same mathematics that governs auction theory, contract theory, and regulatory compliance applies here: if you make cheating expensive enough and detection likely enough, agents stop cheating.

9.3 What This Means for Clients

Enterprises purchasing compute from CTMP's Compute vertical are not just renting GPU cycles. They are accessing an AI platform where every training decision is cryptographically signed and stored in an immutable ledger.
No surveillance hardware exists anywhere in the compute stack because the chips are sovereign, with full chain-of-custody from polysilicon to socket. Models are evaluated against incentive-bias resistance, not just accuracy metrics. The training environment provably rewards truthful operations. No hyperscaler can offer this. Their business model depends on data harvesting. Their hardware is sourced from vendors whose supply chains cross multiple jurisdictions with competing surveillance mandates. Their AI training environments optimize for engagement, revenue, and retention, not for truthfulness. CCL is the competitive moat that no amount of capital can replicate without rebuilding the entire compute stack from silicon to governance. CTMP has already done that.

9.4 Evaluation Framework

CCL models are assessed across four dimensions: representational fairness (ensuring no group is systematically over- or under-represented in predictions), counterfactual fairness (ensuring predictions would remain consistent if protected attributes were different), incentive-bias resistance (the critical addition, measuring whether the model has absorbed systematic distortions from the data-generating environment), and reliability under uncertainty (ensuring the model degrades gracefully rather than producing confident hallucinations under stress). This evaluation framework is built into the Sovereign Logic Engine's monitoring layer, meaning it runs continuously, not as a one-time audit. Every model deployed on CTMP compute is subject to ongoing assessment against these criteria, and any drift triggers automatic review.

9.5 The Sovereign Hardware Stack: Why Chip Ownership Is Non-Negotiable

If the Compute vertical were fed by external accelerators sourced through standard commercial channels, every claim in this article about sovereignty, auditability, and surveillance-freedom would collapse. External chips come with external firmware that runs at the most privileged level of the hardware stack, below the operating system, below the hypervisor, below any software security control. You cannot audit what you did not build. Hardware backdoors are not theoretical. The global semiconductor supply chain passes through multiple jurisdictions, each with its own national security apparatus and legal authority to compel modifications. Chips fabricated in one country, packaged in another, tested in a third, and deployed in a fourth present an attack surface no software can close.

The only closure is ownership of fabrication. The Wafer vertical takes endemic polysilicon, processes it through CTMP-owned fab lines powered by CTMP electrons, and produces 300mm wafers that become sovereign accelerators. Every chip carries cryptographic attestation from ingot growth onward. The SLE maintains complete chain-of-custody from mineral to socket. Semiconductor fabrication is capital-intensive, but within CTMP economics the cost is manageable: electricity at $0.0008/kWh, water from Desalination at cost, construction from internal verticals, zero external debt, and a guaranteed internal buyer in the Compute vertical itself. When you run workloads on CTMP compute, the hardware was fabricated by the same entity providing power, cooling, and governance. No hidden third party. No opaque firmware. No foreign jurisdiction at the silicon level. That is sovereignty.
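The chain-of-custody claim in 9.5 can be pictured as a hash chain: each production stage signs over the record of the stage before it, so any later substitution breaks verification. A minimal sketch; the stage names and record fields below are illustrative assumptions, not the actual SLE attestation format:

```python
# Illustrative hash-chain sketch of wafer-to-socket chain of custody
# (Section 9.5). Stage names and record fields are assumptions; the
# real SLE attestation format is not public.

import hashlib
import json

def attest(record: dict, parent_hash: str) -> str:
    """Hash this stage's record together with the previous stage's hash."""
    payload = json.dumps({"record": record, "parent": parent_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

stages = [
    {"stage": "polysilicon_lot", "id": "PS-0001"},
    {"stage": "ingot_growth",    "id": "IG-0042"},
    {"stage": "wafer_300mm",     "id": "WF-7319"},
    {"stage": "packaged_die",    "id": "DIE-88121"},
    {"stage": "deployed_socket", "id": "POD-12/TANK-3/S-17"},
]

parent = "genesis"
for record in stages:
    parent = attest(record, parent)

# Verification replays the chain; tampering with any earlier stage
# record changes every downstream hash, including the chain head.
print(parent[:16], "... chain head binds all five stages")
```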
9.6 Dielectric Working Media

The immersion system uses industrial dielectric fluids selected for dielectric strength, appropriate boiling point, chemical stability, materials compatibility, and heat transfer efficiency. Specific medium choices and condenser surface interactions are retained in the internal annex. The public outcome: chip-boundary-layer cooling driven by phase-change physics, enabling density and efficiency that air-cooled and single-phase architectures cannot match.

10. WHO THE CLIENTS ARE: EVERYONE WHO NEEDS UNCAPPED, UNCAPTURED COMPUTE

When compute becomes cheap, sovereign, and auditable, the buyer universe expands far beyond "tech companies." The client base is any institution that cannot afford to have its decision substrate captured.

10.1 Governments and Sovereign Agencies

Sovereign cloud demand is not a marketing buzzword. It is a geopolitical imperative that is restructuring how nations think about digital infrastructure. When a national health system runs diagnostics on a cloud platform owned by a foreign corporation, subject to foreign court orders, and operated by employees holding foreign security clearances, the sovereignty of that health data is a contractual fiction. The hardware is not sovereign. The firmware is not sovereign. The jurisdiction is not sovereign. The surveillance risk is not theoretical. It is written into law.

CTMP offers these institutions something no other provider can: compute that is physically sovereign. The chips are fabricated from endemic polysilicon in CTMP-owned fabs. The hardware carries cryptographic attestation from wafer to socket. The facility runs on internal power with no grid dependency that could be leveraged as a pressure point. The governance is constitutional, enforced by the SLE, and not subject to override by any external authority. This is not theoretical demand. The European Union's Gaia-X initiative, India's MeghRaj cloud programme, Saudi Arabia's NEOM compute investment, Singapore's sovereign cloud directives, and Brazil's national AI strategy all reflect the same recognition: compute sovereignty requires physical control, not contractual reassurance. CTMP is the only platform that provides physical control at industrial scale.

10.2 Universities and National Research Systems

The cost of large-scale research compute is becoming prohibitive for all but the wealthiest institutions. A US Congressional Research Service report found that training a large AI model using 8 advanced GPUs for 8 hours consumes approximately 62 kWh, with GPUs running at 93% utilization and a 7.92 kW median power draw. Scale that to the months of continuous operation required for frontier model training, and the costs run to millions or tens of millions of dollars per training run at hyperscaler rates. Climate modelling requires sustained exascale computation over weeks. Genomic analysis and drug discovery require massive parallel processing across billions of molecular configurations. Materials science simulations demand high-throughput density-functional-theory calculations that can occupy thousands of GPU-hours per compound. At CTMP pricing, the same compute that costs a university $10 million on AWS costs under $2 million. That is the difference between one research programme and five, one breakthrough project and an entire national research ecosystem.
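To make the research-cost point concrete, a back-of-envelope sketch using the CRS power figures quoted above. The cluster size and run length are illustrative assumptions, and the two GPU-hour rates are drawn from elsewhere in this article (Sections 7.1 and 13.2), not quotes for any specific workload:

```python
# Back-of-envelope research training cost (Section 10.2).
# CRS figures: an 8-GPU node drawing a median 7.92 kW, ~62 kWh per 8 hours.
# Cluster size, run length, and both prices are illustrative assumptions.

gpus = 512                  # assumed mid-size academic training cluster
hours = 24 * 30             # assumed month-long continuous run
kw_per_8_gpus = 7.92        # CRS median draw for an 8-GPU node

energy_kwh = (gpus / 8) * kw_per_8_gpus * hours       # ~365,000 kWh
gpu_hours = gpus * hours                               # 368,640 GPU-hours

hyperscaler_rate = 3.90     # $/GPU-hour (AWS post-cut rate, Section 7.1)
ctmp_equiv_rate = 1.75      # $/GPU-hour (midpoint of Section 13.2's $1.50-2.00)

print(f"Energy drawn:     {energy_kwh:,.0f} kWh")
print(f"Hyperscaler cost: ${gpu_hours * hyperscaler_rate:,.0f}")  # ~$1.44M
print(f"CTMP-equivalent:  ${gpu_hours * ctmp_equiv_rate:,.0f}")   # ~$0.65M
```

Scaled to frontier-class runs lasting months on far larger clusters, the same ratio produces the "one programme versus five" gap described above.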
10.3 Regulated Critical Industries

Healthcare, finance, telecom, grid operators, water utilities, and transportation authorities are all converging on the same requirement: auditable compute with jurisdictional certainty. GDPR, DORA, NIS2, the EU AI Act, and emerging frameworks in Asia, the Middle East, and Latin America all point in the same direction. If you process critical data, you need to know where it is, who can access it, and whether the hardware is trustworthy. Current cloud providers offer contractual compliance. CTMP offers physical compliance: sovereign chips, sovereign fabrication, sovereign power, sovereign cooling, and constitutional governance. The regulatory direction is unmistakable, and every year the gap between what regulators require and what hyperscalers can truthfully promise widens. CTMP fills that gap with hardware, not paperwork.

10.4 Non-Aligned Nations and National AI Programmes

More than 100 countries are structurally trapped between competing power blocs in cloud and silicon. Their compute options are: depend on US hyperscalers and accept US jurisdiction exposure, depend on Chinese cloud providers and accept Chinese jurisdiction exposure, or build their own, which requires power, cooling, chips, and governance that most nations cannot independently source. CTMP offers a fourth option: sovereign compute deployed on sovereign infrastructure under constitutional governance that no single nation, corporation, or individual controls. The deployment is turnkey. The pricing is posted and transparent. The governance is Charter-bound. For nations that have been told they must choose between American surveillance and Chinese surveillance, CTMP offers neither. It offers physics.

10.5 CTMP Itself

As CTMP scales, the platform becomes a sensor-rich industrial organism producing terabytes of operational telemetry per hour. Every furnace melt, every concrete pour, every turbine rotation, every electrolyser cycle, every HVDC switching event generates data that feeds back into optimization, forecasting, predictive maintenance, and governance enforcement. The Compute vertical is both an external service and the internal nervous system. Without it, CTMP is a collection of industrial verticals. With it, CTMP is an intelligent industrial platform where each vertical makes every other vertical better, and the rate of improvement compounds as more data feeds into better models running on expanding compute capacity.

11. WORKFORCE AND CONSTRUCTION VALIDATION

11.1 Year-1 Operations: 17,500 FTE

A 2 GW immersion-cooled compute complex requires approximately 15,000 to 20,000 operational FTE in Year 1 (midpoint estimate: 17,500). This is not an abstract headcount. These are specific roles:

Compute operations (approximately 5,000 FTE): pod monitoring, performance optimization, workload scheduling, utilization management, incident response, and capacity planning. These are skilled technicians trained in immersion system operations, thermal management, and high-density compute environments. Traditional raised-floor data center operators can transfer with retraining, but the skill set is more closely aligned with industrial process control than with conventional IT operations.

Facility operations (approximately 3,500 FTE): immersion system maintenance, ocean-loop integrity monitoring, DI water quality management, thermal system calibration, power distribution oversight, and environmental compliance.
These roles draw from marine engineering, process engineering, and industrial maintenance disciplines.

Silicon lifecycle (approximately 2,000 FTE): chip testing, burn-in verification, replacement scheduling, wafer-to-socket traceability, failure analysis, and end-of-life recycling coordination with the Wafer vertical. This is a semiconductor-grade quality function operating inside an industrial compute environment.

Security and governance (approximately 3,000 FTE): SLE operations, cybersecurity monitoring, audit compliance, chain-of-custody verification, anomaly investigation, and incident response. The security function covers both physical security and the cryptographic attestation chain from chip fabrication to deployment.

Customer engineering (approximately 4,000 FTE): onboarding, API management, workload migration, CCL model deployment, optimization consulting, and ongoing technical support. These roles interface with external clients and require both compute expertise and domain knowledge in the industries being served.

11.2 Year-1 Construction: 100,000 Job-Years

The initial build is massive. 2 GW of immersion compute capacity, including the wafer fabrication facility, the ocean-loop infrastructure, the HVDC interconnection, and the full supporting civil works, requires approximately 80,000 to 120,000 job-years in Year 1 (midpoint: 100,000). The construction breakdown:

Foundation and civil works (approximately 25,000 job-years): Green Concrete foundations, cable vaults, raised substrates, converter platforms, and site preparation. This is heavy civil construction using internally produced materials.

Structural steel erection (approximately 15,000 job-years): Green Steel frames, rack systems, cooling manifolds, cable trays, and structural supports.

Immersion tank fabrication and installation (approximately 12,000 job-years): 3D-printed tank production, condenser assembly, fluid filling, leak testing, and commissioning. This is a new manufacturing discipline that CTMP trains from the ground up.

Electrical distribution and HVDC (approximately 18,000 job-years): power conditioning, transformer installation, buswork, protection systems, and HVDC converter station interconnection.

Wafer fabrication facility (approximately 15,000 job-years): cleanroom construction, tool installation, utilities, and commissioning of the sovereign silicon production line.

Ocean-loop and marine works (approximately 8,000 job-years): intake gallery, discharge gallery, heat exchange infrastructure, and marine civil engineering shared with canal works.

Networking and interconnect (approximately 7,000 job-years): fibre installation, switching infrastructure, edge networking, and client connectivity.

11.3 Scaling with the Compounding Mandate

Year 1: Ops FTE: 17,500. Construction Job-Years per year: 100,000. Cumulative Job-Years: 117,500.
Year 5: Ops FTE: 28,200. Construction Job-Years per year: 120,000. Cumulative Job-Years: 664,250.
Year 10: Ops FTE: 45,400. Construction Job-Years per year: 150,000. Cumulative Job-Years: 1,546,850.
Year 15: Ops FTE: 73,100. Construction Job-Years per year: 190,000. Cumulative Job-Years: 2,726,950.
Year 20: Ops FTE: 117,700. Construction Job-Years per year: 240,000. Cumulative Job-Years: about 4,300,000.

Cumulative employment through Year 20: approximately 4.5 million job-years from a single CTMP module. These are permanent, skilled, well-compensated positions.
CTMP entry-level wages represent 3 to 6 times the local industrial average in most deployment regions, and the training infrastructure required to produce 117,000+ FTE by Year 20 creates an educational ecosystem that persists long after individual workers rotate.

12. CARBON IMPACT

12.1 Direct Avoidance: The Primary Effect

A conventional 2 GW data center complex running on grid electricity with an average global carbon intensity of 400 to 500 grams CO2 per kWh produces: Facility power at PUE 1.4: 2,000 MW x 1.4 = 2,800 MW. Annual energy: 2,800 MW x 8,760 hours = 24,528,000 MWh = 24.528 TWh. At 450 g CO2/kWh: 24,528,000 MWh x 0.45 t CO2/MWh = 11,037,600 tonnes CO2 per year.

Year-1 avoided emissions from CTMP's zero-combustion baseload: approximately 11 million tonnes CO2. To put that in context, 11 million tonnes CO2 is roughly equivalent to the annual emissions of 2.4 million passenger vehicles, or the total annual emissions of a small European country. This is from one vertical of one module.

12.2 Embodied Carbon Reduction

Green Steel frames avoid approximately 1.8 to 2.0 tonnes CO2 per tonne of steel versus blast-furnace coke-route production. For the steel content of a 2 GW compute facility (approximately 200,000 to 300,000 tonnes), that represents 360,000 to 600,000 tonnes CO2 avoided in embodied carbon. Green Concrete foundations actively sequester CO2 through mineral carbonation. The concrete in a 2 GW facility's foundations, cable vaults, and platforms (approximately 500,000 to 800,000 cubic metres) transitions from a net carbon emitter under conventional production to a net carbon sink under CTMP production. Sovereign chip fabrication powered by clean electrons avoids the fossil-intensive energy mix of conventional semiconductor fabs. A large semiconductor fabrication facility typically consumes 100+ MW of continuous power, much of it sourced from fossil grids. CTMP's wafer fab runs on the same zero-carbon baseload as the compute pods.

12.3 Displacement of Legacy Compute

Every megawatt of compute workload that migrates from a fossil-powered hyperscaler to CTMP's clean baseload displaces grid-connected emissions directly. As CTMP scales through the 10% compounding curve and captures market share through price competition, the cumulative displacement compounds. Year 20, at 12.23 GW with workload displacement: 75 to 100 million tonnes CO2 avoided annually. Cumulative 20-year carbon avoidance from a single module: 700 million to 1 billion tonnes CO2. Across 80 modules at maturity: 56 to 80 billion tonnes CO2 avoided over 20 years. That is a meaningful fraction of the global carbon budget. And unlike carbon offset schemes that depend on accounting fictions, this avoidance is physical: the electrons are clean, the steel is green, the concrete sequesters, and the displaced legacy compute was provably grid-connected and fossil-powered.

13. WHAT HAPPENS TO LEGACY DATA CENTER OPERATORS

Let us be direct about what this means for the incumbents, because they are not abstract entities. They are specific companies with specific cost structures, specific business models, and specific vulnerabilities. The arithmetic is not kind to them, and pretending otherwise would be dishonest.

13.1 Amazon Web Services

AWS is the world's largest cloud provider with over $100 billion in annual cloud revenue and an operating margin of approximately 30 to 37%.
AWS operates data centers across 34 geographic regions worldwide, with the largest concentration in Northern Virginia, where approximately 4,000 MW of data center capacity makes it the world's largest data center market. AWS's data centers run primarily on grid electricity supplemented by renewable energy purchase agreements and some on-site generation. In June 2025, AWS cut H100 GPU prices by 44%, reducing on-demand rates from approximately $7.50/GPU-hour to approximately $3.90/GPU-hour. This was not a competitive choice. It was a defensive necessity driven by specialist providers pushing rates below $3.00 and by Google's aggressive A3-high pricing at approximately $3.00/GPU-hour.

AWS's fundamental vulnerability is that its cost floor is set by the grid. When grid electricity costs $0.06 to $0.10/kWh and PUE runs 1.2 to 1.4, energy represents $1.3 to $2.4 billion per year for a 2 GW equivalent footprint. CTMP's energy cost for the same footprint is $14.7 million. That is not a rounding error. It is a 99.25% cost reduction on the single largest operating line item in the data center business. AWS cannot close that gap without becoming a power generation company, and it is not a power generation company.

AWS also faces a structural trust deficit. Its business model has been built on the deep integration of customer workloads with proprietary services that create switching costs. Its parent company's core business is retail logistics and advertising, both of which depend on data intelligence derived from platform activity. When governments and regulated industries can purchase auditable, surveillance-free compute at 20%+ below AWS pricing, the migration incentive is not just economic. It is existential.

13.2 Microsoft Azure

Azure is the second-largest cloud provider and the most expensive for current-generation GPU compute, with on-demand H100 rates of approximately $6.98/GPU-hour in primary US regions, nearly double the specialist provider average and 79% above AWS's post-cut pricing. Azure's strategic moat depends on enterprise integration with the Microsoft 365 ecosystem and its exclusive partnership with OpenAI for GPT-class model deployment. But that partnership is a double-edged sword. It ties Azure's AI narrative to a single model provider whose governance has been publicly turbulent and whose training pipeline operates under exactly the kind of incentive structures that CCL is designed to replace. When enterprises discover they can access AI models trained under constitutional constraints, on sovereign hardware, at a fraction of Azure's cost, the OpenAI partnership becomes a liability, not an asset.

Azure also faces the grid-cost problem in amplified form. At $6.98/GPU-hour, Azure's pricing already reflects significant markup over underlying energy and hardware costs. CTMP's structural energy advantage means that CTMP can offer equivalent compute at $1.50 to $2.00/GPU-hour equivalent and still maintain 70%+ operating margins. Azure would need to cut prices by 70 to 80% to match, which would destroy its cloud profitability entirely.

13.3 Google Cloud Platform

GCP is the third major hyperscaler, with on-demand H100 pricing of approximately $3.00/GPU-hour on A3-high instances, the most competitive pricing among the big three. Google has the strongest technical AI capability through DeepMind and its Tensor Processing Unit programme, and it has made the most aggressive renewable energy commitments.
Google's total electricity consumption reached 30.8 million MWh in 2024, more than double its 2020 levels. Google's specific vulnerability is the irreconcilable conflict between its advertising business model and sovereign cloud requirements. Google's entire corporate revenue, approximately $300 billion annually, depends overwhelmingly on advertising funded by user data collection, behavioural profiling, and attention monetization. This creates a structural impossibility when sovereign cloud customers require provable data isolation and zero surveillance exposure. Google can offer contractual guarantees of data isolation. CTMP offers physical guarantees: sovereign chips with no hidden kernels, fabricated from endemic polysilicon, cryptographically attested from wafer to socket, running in facilities with no grid dependency that could be leveraged as a jurisdictional pressure point. The difference between a contract and a chip is the difference between a promise and a fact. And in a world where governments are increasingly learning to tell the difference, Google's moat shrinks.

13.4 The Colocation Operators

Beyond the hyperscalers, the colocation industry (Equinix, Digital Realty, CyrusOne, QTS, and hundreds of regional operators) faces a different but equally existential challenge. These companies sell power, space, and cooling. CTMP eliminates the scarcity of all three simultaneously. When power is internal and limitless relative to demand, space is 15x denser, and cooling is 95% more efficient, the colocation value proposition collapses to "proximity to network interconnection," and even that advantage is being eroded by the rise of software-defined networking and long-haul optical connectivity. Colocation operators that have invested heavily in air-cooled infrastructure face the most acute stranding risk. Their raised floors, CRAC units, cooling towers, chilled-water plants, and air-handling systems represent billions in sunk capital that cannot be repurposed for immersion architectures. They are, in effect, building the horse-drawn carriages of the compute era.

13.5 The Narrow Door

Legacy operators face a simple choice. Adapt: license CTMP technology, plug into the sovereign compute ecosystem under SLE governance, retrain workforces for immersion operations, and accept that the economics have permanently shifted. Or retreat: shrink into smaller, higher-cost niches serving customers who cannot or will not migrate, while the price benchmark moves permanently away from them. There will be a transition period. Existing contracts run. Existing workloads have switching costs. Existing relationships have inertia. But inertia is not a strategy. It is a timeline. And the timeline compresses as CTMP's 10% annual compounding makes the cost delta wider every year, not narrower.

The workers in these industries are not our enemies. Grid engineers, data center technicians, cooling system specialists, network operators, security professionals, and cloud architects have skills we need and skills that transfer directly into the immersion-compute environment. We are building training programmes specifically for workforce transition. We are hiring. They should apply.

14. DOCTRINE: PRICE, RULES, AND CERTAINTY

14.1 Core Price Anchors

Internal electrons to Compute vertical: $0.0008/kWh (0.08 cents per kWh). This is the physics-priced cost of ocean-fed hydroelectric generation with no fuel input, no carbon cost, no grid transmission loss, and no commodity market exposure.
14.2 Transfer Pricing and Transparency

All internal transfers follow cost formulas computed by the Sovereign Logic Engine and visible to all governance parties. No ad-hoc discounts. No surge premiums. No hidden interest overlays. No embedded finance charges.

Every compute-hour sold externally carries a full cost breakdown that any auditor can verify against physical telemetry: how many electrons were consumed, at what internal transfer price, with what cooling overhead, on what hardware, producing what output. This level of transparency is unprecedented in the data center industry. Hyperscalers do not publish their energy costs, do not disclose their actual PUE by facility, and do not break down the cost components of a compute-hour. CTMP publishes all of it, because the Stewardship Charter requires it and because transparency is the competitive weapon, not a vulnerability.

14.3 Priority Logic

The SLE dispatch hierarchy allocates power in a constitutional order: life-safety systems first, then electrolyser and industrial melt operations, then Compute vertical pods, then sidecar commercial contracts and surplus allocation. If available power drops below threshold, the SLE curtails from the bottom up. Compute demand that cannot be served is redirected to buffer storage or deferred scheduling, and surplus electrons flow to the Green Molecules platform. A sketch of this allocation order follows this section.

This priority structure means that CTMP compute will never cause a hospital to lose power, will never compete with essential industrial processes for electrons, and will degrade gracefully rather than fail catastrophically. That is a different reliability model from grid-connected data centers, which are subject to grid congestion, demand charges, and competing load priorities they do not control.
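The exact dispatch parameters, thresholds, and control laws are withheld (see section 18), but the constitutional ordering itself is simple enough to sketch. The tier names and the toy demand figures below are ours for illustration:

```python
# Top-down allocation in the constitutional order of 14.3.
# Tier names and demand figures are illustrative; actual SLE dispatch
# parameters are withheld from the public version by design.

PRIORITY = [
    "life_safety",
    "electrolyser_and_melt",
    "compute_pods",
    "sidecar_and_surplus",
]

def dispatch(available_mw: float, demand_mw: dict[str, float]) -> dict[str, float]:
    # Serve tiers from the top; any shortfall curtails the bottom tiers first.
    allocation = {}
    for tier in PRIORITY:
        served = min(demand_mw.get(tier, 0.0), available_mw)
        allocation[tier] = served
        available_mw -= served
    return allocation

# A supply dip to 900 MW curtails sidecar contracts fully and compute pods
# partially, before industrial or life-safety loads see any reduction.
print(dispatch(900.0, {
    "life_safety": 50.0,
    "electrolyser_and_melt": 400.0,
    "compute_pods": 500.0,
    "sidecar_and_surplus": 150.0,
}))
```

In the real system, curtailed compute demand would be redirected to buffer storage or deferred scheduling rather than simply dropped, as described above.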
14.4 Anti-Capture Guardrails

- No external debt on Compute vertical assets.
- No external equity exceeding the programme cap.
- No customer contract that grants control over dispatch, pricing, or chip supply.
- No single customer consuming more than 15% of total compute capacity without governance-level approval.
- No data harvesting from customer workloads for CTMP's own benefit.

These are constitutional constraints coded into the SLE, not policy positions that can be overridden by a board vote or a commercial negotiation.

15. ETHICAL, MORAL, AND EXISTENTIAL FRAME

15.1 Ethical Frame: Compute as a Right

Access to computation determines who can train AI, who can model their own climate risk, who can optimize their own agriculture, industry, and healthcare, who can simulate their own infrastructure planning, and who can educate their own population with the tools of the 21st century. When compute is expensive and concentrated in three companies headquartered in the same country, knowledge is rationed by wealth, geography, and political alignment. The developing world, which generates the least carbon and suffers the most from climate change, cannot afford the compute required to model its own climate adaptation. That is not a market failure. It is a structural injustice produced by scarcity economics applied to a resource that should be abundant.

When compute is cheap and sovereign, knowledge is democratized. The Compute vertical exists to ensure that the most powerful tool humanity has ever built is not reserved for the three companies that can afford the electricity bill. This is the same logic that drove rural electrification, public water systems, and universal telecoms access. Compute is infrastructure. Infrastructure should serve the public, not extract from it.

15.2 Moral Frame: Truth by Design

Charter-Constrained Learning is a moral commitment. The decision to build an AI training environment where truthfulness is the dominant equilibrium strategy, where distortion is detected and penalized automatically, and where every training decision is cryptographically signed and auditable is a statement about what kind of intelligence we choose to create. In a world where most AI systems are trained to maximize engagement, revenue, and retention, to tell users what they want to hear rather than what is true, and to optimize for attention rather than accuracy, CTMP builds AI trained to be right. That is not a marketing distinction. It is a civilizational choice about whether the decision substrate of the 21st century will be governed by truth or by engagement metrics.

15.3 Existential Frame: Sovereignty of Thought

If a nation cannot build its own chips, it cannot think its own thoughts. If its AI is trained on another nation's hardware with another nation's backdoors, its intelligence is not sovereign. If its climate models, health analytics, grid optimization, and defence logistics depend on compute infrastructure controlled by a foreign corporation answerable to a foreign government, then its strategic autonomy is a polite fiction maintained by the commercial convenience of the status quo.

The Compute vertical, anchored by the Wafer vertical's 300-millimetre sovereign fabrication and governed by the Stewardship Charter, is how nations take back ownership of their own computational future. This is not a technology story. It is a sovereignty story. And for many nations, it is the only credible path to genuine computational independence that does not require choosing between American surveillance and Chinese surveillance.

16. DELIVERABLES AND KPIs

16.1 Manufacturing and Deployment

- Immersion pods deployed per year (target: Year-1 baseline plus 10% annual growth).
- Sovereign accelerator chips fabricated and installed per year.
- Total IT power capacity commissioned (MW).
- First-pass yield on wafer fabrication (target: greater than 95%).
- Pod deployment cycle time from tank fabrication to first compute-hour.
- Supply chain self-sufficiency ratio (target: greater than 90% internal sourcing).

16.2 Operational Performance

- Compute utilization rate (target: greater than 85%).
- Pod availability (target: greater than 99.7%).
- Power Usage Effectiveness (target: less than 1.05).
- Mean time between failures per pod.
- Thermal stability variance (target: less than 0.5 degrees Celsius from setpoint).
- Ocean-loop integrity index.

16.3 Financial

- Revenue per MW of IT capacity.
- Energy cost as a percentage of revenue (target: less than 0.2%; a cross-check is sketched after this list).
- Operating margin (target: greater than 70%).
- Annual capacity expansion rate (mandated 10%).
- Client diversification (no single customer above 15% without governance approval).
- On-time delivery rate for expansion increments.
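The energy-cost target can be cross-checked from figures already published in this document: the $400/kW/month managed tariff, the $0.0008/kWh internal electron price, and the sub-1.05 PUE target above (assumed met here). A minimal sketch:

```python
# Cross-check of the 16.3 target "energy cost < 0.2% of revenue" using
# figures published in this document: $400/kW/month managed tariff,
# $0.0008/kWh internal electrons, and a PUE of 1.05 (target assumed met).

HOURS_PER_YEAR = 8760

# Revenue: $400 per kW of IT capacity per month -> per MW per year.
revenue_per_mw_year = 400 * 1000 * 12           # $4,800,000

# Energy: 1 MW of IT load; facility draw = IT load * PUE.
energy_per_mw_year = 1000 * HOURS_PER_YEAR * 1.05 * 0.0008  # ~$7,358

ratio = energy_per_mw_year / revenue_per_mw_year
print(f"Energy cost as share of revenue: {ratio:.3%}")  # ~0.153%
```

The ratio lands around 0.15%, comfortably inside the 0.2% target.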
16.4 Workforce

- Certified immersion technicians per quarter.
- Safety TRIR (target: less than 0.25 per 200,000 hours worked).
- Training hours per FTE per year (minimum 80).
- Retention rate (target: greater than 90%).
- Local employment ratio (target: greater than 80% host-nation workforce).
- Workforce transition enrolment from legacy operators.

16.5 Carbon

- Tonnes of CO2 avoided per year, verified against grid displacement.
- Fossil MW of legacy compute retired per GW of CTMP capacity commissioned.
- Embodied carbon per MW deployed.
- Lifecycle carbon intensity per compute-hour delivered.

17. WHAT THIS VERTICAL REALLY IS

The Compute vertical is not a side business. It is not a cloud provider. It is not a data center operator. It is the conversion layer that turns CTMP's electrons into intelligence, and then feeds that intelligence back into the platform to make every other vertical tighter, faster, safer, and more predictable.

The world is learning, in real time, that it cannot build the AI era on constrained grids and legacy cooling. That is physics, not opinion. The IEA, LBNL, Goldman Sachs, JLL, CBRE, and every grid operator in every major data center market are saying the same thing: the power is not there, the cooling is not scalable, and the timelines are blowing out.

CTMP's Compute vertical is built on a different substrate. An owned power source at 0.08 cents per kilowatt-hour. An owned thermal sink that does not care about the weather. An owned hardware stack with no hidden kernels and no foreign backdoors. And a governed operational environment where truth is the dominant strategy, not an afterthought. That is why it is not just "a data center." It is a sovereign compute utility, produced like steel, enforced like infrastructure, and scaled like an industrial system.

18. NOTE ON PUBLIC DISCLOSURE BOUNDARIES

To keep this disclosure replication-safe and Charter-aligned, the public version does not include:

- exact depths to cold layers, pipe dimensions, flow rates, thermocline coordinates, or siting bathymetry;
- exact tank layouts, condenser geometry specifications, coil surface design parameters, or working-fluid boiling points;
- exact dispatch priorities, lockout thresholds, telemetry triggers, or control law descriptions;
- exact capacity builds, chip packing densities, or PCB thermal stack details.

Those details belong in an internal annex, not in a public disclosure. What is published here is the logic chain: the physics, the economics, the governance, and the arithmetic. That is enough for any serious reader to verify the claims. And it is enough for any serious competitor to understand that the claims are backed by a physical substrate they do not possess.

19. REFERENCES

[1] International Energy Agency (IEA). Data Centres and Data Transmission Networks, 2024. Global electricity demand projections (945 TWh by 2030).
[2] Lawrence Berkeley National Laboratory (Shehabi et al., 2024). United States Data Center Energy Usage Report. U.S. consumption 176 TWh (2023), projections to 2028.
[3] Goldman Sachs Research. Data center power demand projections (+50% by 2027, +165% by 2030 vs 2023).
[4] CBRE. North America Data Center Trends Report, H1 2025. Vacancy 1.6%; colocation pricing $163-$217/kW/month.
[5] JLL. 2026 Global Data Center Outlook. Construction costs $10.7M-$11.3M/MW; 100 GW of new capacity projected 2026-2030.
[6] MarketsandMarkets. Global Colocation Market 2025-2030. $104.2B (2025) to $204B (2030).
[7] Theorem Power & Signal (TP&S). Data Centers for a New Age, May 2024. 250 kVA per tank; 1 MW in 72 sq ft; 15x footprint reduction; 95/5 compute-to-cooling split.
[8] Coode, C.M. (2026). Charter-Constrained Learning: Incentive-Compatible Training Environments for Unbiased Industrial AI. HLX Technical Paper.
[9] NIST (2023). AI Risk Management Framework (AI RMF 1.0), NIST AI 100-1.
[10] ISO/IEC (2023). ISO/IEC 23894: AI Guidance on Risk Management.
[11] Top500 Supercomputer List, June 2025. Global aggregate compute performance.
[12] U.S. Congressional Research Service (R48646, 2025). Data Centers and Their Energy Consumption. GPU power: 7.92 kW median for 8 GPUs at 93% utilization.
[13] IRENA. Renewable Power Generation Costs 2023. Levelized cost for large hydroelectric.
[14] AWS June 2025 price cut: 44% on P5 (H100) instances. Reported via DatacenterDynamics, Thunder Compute, and GPU pricing aggregators.
[15] Azure, GCP, and specialist GPU pricing compiled from GMI Cloud, Hyperbolic, JarvisLabs, and IntuitionLabs (Nov 2025 to Jan 2026).
[16] GPU-as-a-Service market: $3.34B (2023) to projected $33.9B (2032).
[17] World Steel Association. Steel LCA Methodology Report (2021).
[18] Global Cement and Concrete Association (GCCA). Concrete Future: Net Zero Roadmap (2021).