A single composite, 0-100, ranking the 54 African countries for long-horizon conservation-capital deployment. Published with the formula, the weights, the null-handling rules, the alternatives considered and rejected, and the failure modes we know about.
Thesis. A funder allocating capital into African conservation today has to synthesise eight dimensions of political risk, nine dimensions of protected-area performance, a national carbon-market framework, and the density of ongoing deployment activity, to make a single decision: does this country clear the bar? Every serious allocator performs this synthesis. Most do it implicitly, inside their head, in ways that do not survive scrutiny when the next analyst has to rebuild the case. CCS does it explicitly.
CCS is not a number we believe more than the practitioners who use it. It is a number we publish so the practitioners who use it can disagree with it productively, recompute it under different weights, and hold us to account when the ranking shifts. The formula is simple on purpose. Simple composites rank better than elaborate ones. Known-direction signals beat hidden black boxes.
What CCS is for. Country screening, allocation-weighting, and pipeline triage. What CCS is not for. Project-level underwriting, operator selection, or any decision that turns on within-country variance. For that, use PACE, Funders, Who's Who, and the intelligence feed directly.
CCS is a weighted average of four component scores (RISK, PACE, POLICY, DEPLOYMENT), each on a 0-5 scale, renormalised to 0-100. The formula in full:

CCS = 20 × (w_RISK · RISK + w_PACE · PACE + w_POLICY · POLICY + w_DEPLOYMENT · DEPLOYMENT)

Each component score sits on 0-5; the factor of 20 maps the weighted average onto the 0-100 scale.
Weights sum to 1.00 by construction. They are not optimised, they are chosen. Their justification sits with each component below. We published this formula with the intent that a reader who disagrees with the weights can recompute CCS in a spreadsheet in five minutes with the published component scores on each country page.
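The recomputation invited above takes only a few lines. A minimal sketch with illustrative component scores: the 0.40 RISK weight is the one stated in this methodology, while the other weights below are placeholders a reader would replace with the published values from the country pages.

```python
def ccs(risk, pace, policy, deployment, weights=None):
    """Compose a 0-100 CCS from four 0-5 component scores.

    The RISK weight (0.40) is stated in the methodology; the remaining
    weights here are placeholders, not the published values.
    """
    if weights is None:
        weights = {"risk": 0.40, "pace": 0.25, "policy": 0.15, "deployment": 0.20}
    assert abs(sum(weights.values()) - 1.00) < 1e-9, "weights must sum to 1.00"
    weighted = (weights["risk"] * risk
                + weights["pace"] * pace
                + weights["policy"] * policy
                + weights["deployment"] * deployment)
    return 20 * weighted  # renormalise the 0-5 weighted average to 0-100

# Illustrative country: strong risk profile, mid PA cohort,
# agreement-signed framework (4.0), moderate pipeline.
print(round(ccs(risk=3.8, pace=3.2, policy=4.0, deployment=2.6), 1))  # 68.8
```

Swapping in a different weight dict and re-running is the whole exercise; nothing in the composition step is hidden.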
Four components. Each sourced from existing Canopy proprietary scoring or from public data with a documented extraction rule. No component weight is arbitrary; the rationale is stated against each.
- **RISK (weight 0.40).** The Canopy political-risk composite_score, direct use. Already on the required 0-5 scale.
- **PACE.** Unweighted mean of composite_score across the PAs rated in the country. No operator weighting, no area weighting.
- **POLICY.** National carbon-market framework status, mapped to the 0-5 scale:
  - operational = 5.0
  - agreement-signed = 4.0
  - in-negotiation = 3.0
  - framework-only = 2.0
  - no-framework = 1.0

  Null rule: default to no-framework (1.0). The policy tracker covers all 54 African countries, so missing entries should not occur.
- **DEPLOYMENT.** Projects sub-score: min(5, 2.5 × log10(n + 1)), where n = count of tracked projects. This yields 0.0 for zero projects, 0.75 for 1, 2.6 for 10, 3.7 for 30, and 5.0 at 100+. Log-scaling prevents Kenya (36 projects) from dominating Comoros (1 project) on this axis by more than the signal warrants. Donor-environment sub-score: the donor_env dimension score, direct use. Already 0-5.
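The projects sub-score is mechanical enough to sketch directly. This reproduces the min(5, 2.5 × log10(n + 1)) rule and its cap:

```python
import math

def projects_subscore(n):
    """DEPLOYMENT projects sub-score: log-scaled count of tracked projects,
    capped at 5.0. Zero projects score zero by design; the cap is reached
    at 100+ projects."""
    return min(5.0, 2.5 * math.log10(n + 1))

for n in (0, 1, 10, 30, 100):
    print(n, round(projects_subscore(n), 2))
```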
The CCS 0-100 number is the audit form. The four-tier band is the citation form. Either can be used in isolation; both are always published together.
| Tier | CCS range | Label | Interpretation |
|---|---|---|---|
| Tier 1 | 65.0 and above | Deployment-ready | The country clears a high bar on the forward-looking components and shows active deployment. Standard due diligence suffices. Expected top ~15% of 54 countries. |
| Tier 2 | 50.0-64.9 | Deployable with mitigants | One or more components materially below continental median, but other components compensate. Specific mitigants (structure, insurance, co-investor profile) required. Expected ~25% of 54. |
| Tier 3 | 35.0-49.9 | Selective deployment | Deployment defensible only for specific strategies (community-conservancy focus, blended-finance structures, donor-led vehicles). Not a passive-allocation destination. Expected ~35% of 54. |
| Tier 4 | Below 35.0 | Monitoring-only | Political, legal, or capital-flow environment unsuitable for long-horizon conservation capital deployment under current conditions. Monitor for inflection. Expected ~25% of 54. |
The tier thresholds are absolute, not percentile-based. A country's tier does not change because other countries' CCS changed. This is deliberate. A percentile-based system would mean a country could drop a tier because another country improved, which is nonsensical for an allocation decision. We report continental percentile alongside the absolute tier for context only.
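Because the thresholds are absolute, the banding rule is a pure function of a country's own score; the labels follow the table above:

```python
def ccs_tier(score):
    """Map a 0-100 CCS to its absolute tier band. Thresholds are fixed,
    not percentile-based: a country's tier depends only on its own score,
    never on how other countries moved."""
    if score >= 65.0:
        return "Tier 1"  # Deployment-ready
    if score >= 50.0:
        return "Tier 2"  # Deployable with mitigants
    if score >= 35.0:
        return "Tier 3"  # Selective deployment
    return "Tier 4"      # Monitoring-only
```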
For press, pitch decks, and third-party reference, either the four-tier band (the citation form) or the 0-100 score (the audit form) may be cited, alone or together. Canopy requests attribution on any public reproduction of CCS or its components. See the corrections policy.
Composite scores live and die by how they handle missing data. Every CCS component has an explicit null rule, published above and restated here for the reader who wants to know the failure modes before trusting the number.
Continental median imputation. Chosen over zero-fill (which would penalise low-coverage countries unfairly) and over exclusion from CCS (which would shrink the universe). The continental median is recomputed quarterly from the published PACE cohort. Imputed countries are flagged with a coverage pip indicator on every page.
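A minimal sketch of the imputation rule, assuming a hypothetical country-to-PACE mapping in which None marks a country with no rated PAs; the function name is illustrative:

```python
import statistics

def impute_pace(pace_by_country):
    """Fill missing PACE scores (None) with the continental median of the
    covered countries, flagging each imputation so the coverage pip can
    be rendered on the country page."""
    covered = [s for s in pace_by_country.values() if s is not None]
    median = statistics.median(covered)
    return {
        country: {"pace": median if score is None else score,
                  "imputed": score is None}
        for country, score in pace_by_country.items()
    }
```

Zero-fill would drag low-coverage countries toward Tier 4 by default; exclusion would shrink the 54-country universe. Median imputation avoids both, at the cost of flattening imputed countries toward the middle, which is exactly why the coverage flag travels with the score.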
Log-scaling prevents sample-size distortion. A country with 1 tracked project scores meaningfully above zero (reflecting the presence of deployment signal) without claiming parity with a country that has 20. Countries with zero tracked projects score zero on the projects sub-score; this is penalising-by-design, not an accident.
Seychelles, Sao Tome and Principe, Comoros, Cabo Verde, and Mauritius have limited PA cohorts, limited project pipelines, and small absolute capital-deployment footprints. Their CCS scores should be read with the coverage pip prominently in mind. A Tier 1 Seychelles and a Tier 1 South Africa are both defensible but for different reasons.
Somalia, South Sudan, Libya, Sudan. The RISK composite already captures the relevant information at 40% weight. CCS does not apply a separate conflict override. We considered a hard floor (any country with a RISK composite below 2.0 forced to Tier 4) and rejected it as redundant: the weighted formula already produces Tier 4 for these countries without a hand-tuned cutoff.
Components refresh asynchronously. RISK is scored on human-review cycles (roughly quarterly), PACE on PA-addition cycles (monthly to quarterly), POLICY on tracker updates (rolling), DEPLOYMENT on projects-directory updates (rolling). CCS is recomputed on every component update and time-stamped. The number on any dossier is the CCS as of the last component refresh for that country, not a static quarterly number.
A methodology page that shows only the chosen formulation invites the reader to imagine that no alternatives were considered. Here are the ones we considered and why they lost.
The equal-weighting prior is tempting because it avoids the appearance of editorial judgement. But equal weighting is itself an editorial judgement, and a weaker one. RISK carries more signal than POLICY on a ten-year horizon; pretending otherwise produces a rank order that practitioners would override by hand, which is worse than an opinionated index they can disagree with explicitly.
Percentile bands mean a country can drop a tier because another country improved. For allocation decisions this is nonsensical. Absolute thresholds preserve the meaning of the tier across time. We report percentile as supplementary context only.
A geometric mean has a superficially attractive property: any component scoring zero forces CCS to zero. But this overweights single-component failures in a way that would produce unhelpful rankings (a country with deep RISK strength and a pending policy framework would be penalised disproportionately). The arithmetic mean is the right choice for a composite whose components are complements, not gates.
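The gap between the two means is easy to demonstrate on a hypothetical country with deep RISK strength and a framework still pending; equal weights are used purely for illustration:

```python
import math

# Hypothetical component scores: strong RISK, solid PACE,
# pending policy framework, early pipeline.
scores = [4.5, 3.5, 1.0, 2.0]

arith = sum(scores) / len(scores)                  # 2.75
geom = math.prod(scores) ** (1 / len(scores))      # ~2.37: low POLICY punished harder
print(round(arith, 2), round(geom, 2))

# With any component at zero, the geometric mean collapses outright,
# turning a single-component failure into a total score of zero:
geom_zero = math.prod([4.5, 3.5, 0.0, 2.0]) ** (1 / 4)
print(geom_zero)  # 0.0
```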
CCS should answer "how deployable is this country" not "how big is the addressable opportunity". Size is a separate question and should drive allocation weighting downstream of CCS, not be bundled into it. A small well-governed country is not a worse destination per dollar than a large poorly-governed one.
S&P/Moody's/Fitch sovereign ratings correlate heavily with RISK tier-1 dimensions (governance, donor environment), so adding them would double-count. We considered using them as a cross-check but ultimately they do not add information to the composite.
We considered adding a component tracking announced-but-unclosed operator expansions (African Parks next-PA commitments, new concession awards). Rejected on grounds that announcement-based data is noisy and reward-hackable. If a pipeline matures it will show up in DEPLOYMENT within 12-18 months; until then it shouldn't move the score.
Any composite lies in some direction. Here are the directions we know CCS lies in, published so practitioners can adjust.
When a country's framework status moves from in-negotiation to agreement-signed, its POLICY score jumps from 3.0 to 4.0, adding 3 points to CCS in one step. This is correct (the event is discrete) but produces visible score jumps on announcement days. Readers tracking CCS time-series should expect stepwise movement on the POLICY axis, not smooth drift.

CCS is not the first index that ranks African countries. It is the first purpose-built for conservation capital. The relevant peers and what they miss:
CCS is designed to be the reference index a conservation-finance practitioner reaches for first. When CCS disagrees with WGI, EIU, or CPI, the disagreement is itself signal: those indices were built for different decisions.
How CCS changes over time, and under what circumstances.
CCS recomputes on every refresh of any underlying component. No manual trigger. Each country page shows the last-updated timestamp for its CCS, accurate to the day.
Weights, band thresholds, and null-handling rules are treated as version-controlled. Any change to these is published with (a) the old value, (b) the new value, (c) the reason for the change, (d) the effective date. Historical CCS under old methodologies is preserved on request.
The current version is CCS v1.0, effective 23 April 2026. Material revisions will bump the version number (v1.1, v2.0, etc).
Past CCS values are not re-stated under new methodologies. A country's "CCS movement" time-series will always compare like-for-like within a single methodology version. Cross-version comparisons are disclosed explicitly.
Every CCS component has a source trail. Every source-trail entry is reviewable. Every review is open to challenge.
Human review. RISK and PACE composites are LLM-drafted and human-reviewed before publication. POLICY is editorial input based on public tracker data. DEPLOYMENT is derived from Canopy's projects directory, which is human-curated from public registries. CCS is a pure composition step; the composition itself is deterministic.
Source trail. Every RISK dimension note carries source URLs. Every PACE dimension note carries source URLs. The POLICY tracker is editorial. The projects directory cites registry IDs and issuer pages. A reader disputing a CCS value can trace it back to the underlying evidence in 2-3 clicks.
Corrections. Errors should be reported to corrections@canopy.africa. Fourteen-day response SLA. Verified corrections are merged into the next CCS refresh with public note of the change.
Disputes. Any country or operator disputing its CCS, any component score, or any source citation can request a formal review. Canopy will publish the dispute, the review process, the decision, and (if warranted) the correction. Canopy does not suppress disputes.