Climate modeling on open datasets

Regional and global climate simulations using openly available reanalysis data, model intercomparison runs, and downscaling experiments. Long-running, parallelizable, and naturally checkpointable — well-suited to a preemptible batch allocation.
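The "naturally checkpointable" property above can be sketched minimally: a time-stepping loop that periodically persists its state and resumes from the latest checkpoint after preemption. This is an illustrative sketch, not a real climate model; the file name, state layout, and step arithmetic are all hypothetical stand-ins.

```python
import json
import os
import tempfile

# Hypothetical sketch: a time-stepping loop that checkpoints its state so a
# preempted batch job resumes from the last saved step instead of step 0.
CHECKPOINT = os.path.join(tempfile.gettempdir(), "sim_checkpoint.json")

def load_state():
    """Resume from the latest checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "value": 0.0}

def save_state(state):
    """Write the checkpoint atomically: temp file, then rename."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename: no torn checkpoint files

def run(total_steps=1000, checkpoint_every=100):
    state = load_state()
    while state["step"] < total_steps:
        state["value"] += 0.5 * state["step"]  # stand-in for one model timestep
        state["step"] += 1
        if state["step"] % checkpoint_every == 0:
            save_state(state)  # a preemption between saves loses at most 99 steps
    save_state(state)
    return state

final = run()
```

The atomic-rename detail matters more than it looks: a job preempted mid-write must never leave a half-written checkpoint as the only copy.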

Epidemiological simulation

Agent-based and compartmental models for disease spread, intervention scenarios, and wastewater surveillance analytics. Bursty workloads that benefit from elastic capacity rather than dedicated infrastructure.

Large-scale scientific computation

Computational physics, chemistry, genomics pipelines, and materials discovery sweeps. Embarrassingly parallel by structure, with discrete units of work that survive preemption cleanly.
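The "discrete units of work that survive preemption cleanly" pattern can be sketched as a sweep where each unit writes its own result file and completed units are skipped on restart, making the whole job idempotent. The parameter sweep and "energy" computation below are hypothetical placeholders.

```python
import json
import os
import tempfile

# Hypothetical sketch: an embarrassingly parallel sweep split into discrete
# units. Each unit persists its own result; after a preemption and restart,
# finished units are detected on disk and skipped, so no work is repeated.
OUTDIR = os.path.join(tempfile.gettempdir(), "sweep_results")
os.makedirs(OUTDIR, exist_ok=True)

def result_path(unit_id):
    return os.path.join(OUTDIR, f"unit_{unit_id}.json")

def run_unit(unit_id, param):
    """One independent work unit: compute and persist, or skip if already done."""
    path = result_path(unit_id)
    if os.path.exists(path):  # completed before a previous preemption
        with open(path) as f:
            return json.load(f)
    result = {"unit": unit_id, "energy": param ** 2}  # stand-in computation
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(result, f)
    os.replace(tmp, path)  # atomic: a unit is either fully done or not started
    return result

params = [0.5 * i for i in range(8)]
results = [run_unit(i, p) for i, p in enumerate(params)]
```

Because each unit is independent and its completion check is on-disk, the same script can be relaunched any number of times and only unfinished units run.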

Public-interest machine learning

Training and fine-tuning of models for public-good applications — medical imaging triage, accessibility tooling, language preservation, multilingual public-service models. Open weights and open methodology are part of the grant.

Socio-economic modeling

Microsimulation models, labor-market analytics, transport-network simulations, and policy-impact studies built on open statistical data. Periodic re-runs as new data arrives map cleanly to rolling allocation cycles.
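One way the "re-run as new data arrives" cycle is commonly implemented is a content-hash stamp: the model re-runs only when the input snapshot actually differs from the one processed in the previous allocation cycle. The stamp file and toy dataset below are hypothetical.

```python
import hashlib
import os
import tempfile

# Hypothetical sketch: gate a periodic re-run on whether the open dataset has
# changed since the last cycle, by recording a SHA-256 of the input snapshot.
STAMP = os.path.join(tempfile.gettempdir(), "model_input.stamp")

def needs_rerun(data: bytes) -> bool:
    """True when the input differs from the last processed snapshot."""
    new = hashlib.sha256(data).hexdigest()
    if os.path.exists(STAMP):
        with open(STAMP) as f:
            if f.read() == new:
                return False  # same data as last cycle: skip the re-run
    with open(STAMP, "w") as f:
        f.write(new)
    return True

release_1 = b"region,employment\nA,0.94\n"
first = needs_rerun(release_1)                  # first sighting: run the model
second = needs_rerun(release_1)                 # unchanged: skip
third = needs_rerun(release_1 + b"B,0.91\n")    # revised release: run again
```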

Open research analytics

Reproducible analyses on public scientific datasets, benchmark execution, and infrastructure for open replication studies. Capacity here directly supports the verifiability of published research.

All workloads above are illustrative. Approval depends on the specific project and governance review — not on the category alone. All work runs as preemptible batch jobs and must tolerate restart.
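"Must tolerate restart" usually pairs with graceful shutdown: many batch platforms deliver a termination signal (often SIGTERM) shortly before reclaiming a preemptible node, though the exact signal and grace period are platform-specific assumptions here. A minimal sketch of trapping it so the job can stop at a safe boundary:

```python
import signal

# Hypothetical sketch: trap the preemption warning signal (assumed here to be
# SIGTERM; check your platform) and flip a flag, so the main loop can
# checkpoint and exit cleanly instead of being killed mid-write.
class GracefulShutdown:
    """Flips a flag on SIGTERM so the work loop can stop at a safe point."""
    def __init__(self):
        self.requested = False
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        self.requested = True

def run(steps, shutdown):
    done = 0
    for _ in range(steps):
        if shutdown.requested:
            break  # in a real job: checkpoint here, then exit for restart
        done += 1  # stand-in for one unit of work
    return done

shutdown = GracefulShutdown()
completed = run(10, shutdown)
```

Combined with the checkpointing and idempotent-unit patterns above this is the whole restart contract: stop cleanly when warned, and lose nothing that was already persisted.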