Canopy Proprietary · Methodology

Canopy Country Score

A single composite, 0-100, ranking the 54 African countries for long-horizon conservation-capital deployment. Published with the formula, the weights, the null-handling rules, the alternatives considered and rejected, and the failure modes we know about.

Author: Ralph Lazar, editor of Canopy. MSc Economics, LSE. Former Goldman Sachs (Global Equity Strategy) and Credit Suisse First Boston (Fixed-Income Proprietary Trading). Corrections: corrections@canopy.africa

Thesis. A funder allocating capital into African conservation today has to synthesise eight dimensions of political risk, nine dimensions of protected-area performance, a national carbon-market framework, and the density of ongoing deployment activity, to make a single decision: does this country clear the bar? Every serious allocator performs this synthesis. Most do it implicitly, inside their head, in ways that do not survive scrutiny when the next analyst has to rebuild the case. CCS does it explicitly.

CCS is not a number we believe more than the practitioners who use it. It is a number we publish so the practitioners who use it can disagree with it productively, recompute it under different weights, and hold us to account when the ranking shifts. The formula is simple on purpose. Simple composites rank better than elaborate ones. Known-direction signals beat hidden black boxes.

What CCS is for. Country screening, allocation-weighting, and pipeline triage. What CCS is not for. Project-level underwriting, operator selection, or any decision that turns on within-country variance. For that, use PACE, Funders, Who's Who, and the intelligence feed directly.

Section 1

The formula

CCS is a weighted average of four component scores, each on a 0-5 scale, renormalised to 0-100. The formula in full:

CCS = 20 × ( 0.40 × RISK + 0.25 × PACE + 0.15 × POLICY + 0.20 × DEPLOYMENT )

# All four components are 0.00-5.00. Output is 0.0-100.0.
# A country scoring 5 on every component has CCS = 100.0.
# A country scoring 0 on every component has CCS = 0.0.
# Continental median CCS is approximately 50.

Weights sum to 1.00 by construction. They are not optimised; they are chosen, and the justification for each sits with its component below. We publish the formula so that a reader who disagrees with the weights can recompute CCS in a spreadsheet in five minutes, using the published component scores on each country page.
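The five-minute spreadsheet recomputation can equally be sketched in a few lines of Python. A minimal sketch: the function name and the range check are ours, not part of the published methodology; component scores are read off the country pages.

```python
# Sketch of the CCS composition step. WEIGHTS are the published weights;
# the ccs() name and the 0-5 range check are our additions for illustration.

WEIGHTS = {"risk": 0.40, "pace": 0.25, "policy": 0.15, "deployment": 0.20}

def ccs(risk: float, pace: float, policy: float, deployment: float) -> float:
    """Compose four 0-5 component scores into a 0-100 CCS."""
    components = {"risk": risk, "pace": pace, "policy": policy,
                  "deployment": deployment}
    for name, score in components.items():
        if not 0.0 <= score <= 5.0:
            raise ValueError(f"{name} must be on the 0-5 scale, got {score}")
    return 20.0 * sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
```

A country at 5.0 on every component composes to exactly 100.0; a reader who prefers different weights edits `WEIGHTS` and re-runs.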

Section 2

Components

Four components. Each sourced from existing Canopy proprietary scoring or from public data with a documented extraction rule. No component weight is arbitrary; the rationale is stated against each.

RISK
40% weight
0.00-5.00
Source
Canopy RISK composite score, published methodology. Eight dimensions covering governance, conflict, donor posture, PA legal durability, tenure, operator-government relations, capital flow freedom, and market policy. All 54 countries scored.
Scoring rule
Direct use of composite_score. Already on the required 0-5 scale.
Null handling
Not applicable in practice: all 54 African countries carry a published RISK score. Were a RISK score ever pending, the country would be excluded from CCS rather than imputed.
Weight rationale
Political risk eats conservation-capital returns first and compounds fastest. Multi-decade horizons magnify governance and conflict exposure in ways short-horizon asset classes avoid. Empirically, the dispersion of RISK composite scores across the continent (1.7 to 4.3 on the current cohort) carries more signal than any other single component. Under-weighting RISK is the most common failure mode in consultant composites. We over-weight it on purpose.
PACE
25% weight
0.00-5.00
Source
Canopy PACE (Protected Area Conservation Effectiveness) composite scores, published methodology. Nine dimensions, including biodiversity, ecosystem services, security, ecotourism, community development, leadership, budget, and fundraising.
Scoring rule
Simple arithmetic mean of composite_score across PAs rated in the country. No operator weighting, no area weighting.
Null handling
Countries with zero rated PAs are assigned the continental median PACE computed across the 38 currently rated PAs (median = 3.15 on the current cohort). This is a conservative imputation that neither penalises nor rewards non-coverage. Countries with pending PACE expansion inherit this median until their first PA lands in the published cohort. This choice is defensible but not costless: it hides the fact that no PA has been professionally scored, which may matter to the reader. The coverage pip on every country page signals this.
Weight rationale
A country is a safe place to deploy conservation capital only if capital can land on a well-run protected area. PACE measures that landing-site quality directly. The 25% weight positions PACE as second only to RISK because place-level performance compounds: a weak PA becomes a weaker PA on a ten-year horizon; a strong PA survives its founders. We considered weighting PACE equal to RISK and reject it: within-country PA variance is higher than within-country RISK variance, so the country-level PACE mean is a less reliable signal than the country-level RISK composite.
POLICY
15% weight
0.00-5.00
Source
Canopy Policy Tracker, Article 6.2 status. Single categorical field per country.
Scoring rule
Ordinal mapping of Article 6 status:
operational = 5.0 · agreement-signed = 4.0 · in-negotiation = 3.0 · framework-only = 2.0 · no-framework = 1.0
Null handling
Missing policy entry is treated as no-framework (1.0). The policy tracker covers all 54 African countries so missing entries should not occur.
Weight rationale
Article 6.2 readiness unlocks specific instrument sets (ITMO generation, corresponding adjustments, compliance-market access) that materially change the capital-structure options available to a conservation project. A country without a framework is not uninvestable, but the instrument palette narrows. We considered weighting POLICY at 20% and reject it: policy status is a fast-moving signal (a framework can be stood up in a budget cycle) and over-weighting it makes CCS whippy across refresh cycles. 15% captures the real signal without amplifying noise.
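The ordinal mapping and its null rule reduce to a small lookup. A minimal sketch; the dictionary and function names are ours:

```python
from typing import Optional

# Ordinal mapping from the POLICY scoring rule; a missing tracker entry
# falls back to no-framework (1.0) per the published null-handling rule.
ARTICLE6_SCORE = {
    "operational": 5.0,
    "agreement-signed": 4.0,
    "in-negotiation": 3.0,
    "framework-only": 2.0,
    "no-framework": 1.0,
}

def policy_score(status: Optional[str]) -> float:
    # None or an unrecognised status both degrade to the no-framework floor.
    return ARTICLE6_SCORE.get(status or "no-framework", 1.0)
```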
DEPLOYMENT
20% weight
0.00-5.00
Source
Canopy Projects directory (project count in country) and RISK donor-environment dimension (already tier-1, data-driven).
Scoring rule
Average of two equally-weighted sub-scores:
Projects sub-score = min(5, 2.5 × log10(n + 1)), where n = count of tracked projects. This yields 0.0 for zero projects, 0.75 for 1, 2.6 for 10, 3.7 for 30, and 5.0 at 100+. Log-scaling prevents Kenya (36 projects) from dominating Comoros (1 project) on this axis by more than the signal warrants.
Donor sub-score = RISK donor_env dimension score, direct use. Already 0-5.
Null handling
Countries with zero tracked projects get a 0.0 on projects sub-score, full weight. This is penalising by design: absence of deployment activity is itself a signal.
Weight rationale
RISK, PACE and POLICY are forward-looking. DEPLOYMENT is the market's revealed preference: where capital is already flowing, despite RISK, despite POLICY gaps. This carries information that the forward-looking components miss. We weight DEPLOYMENT above POLICY because the market's decisions are louder than regulators'. We weight it below PACE because quantity is not quality.
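The DEPLOYMENT scoring rule above can be written out directly. A sketch under stated assumptions: the function names are ours, and `donor_env` stands for the RISK donor-environment dimension on its native 0-5 scale.

```python
import math

def projects_subscore(n: int) -> float:
    """min(5, 2.5 * log10(n + 1)): zero projects scores 0.0 by design."""
    return min(5.0, 2.5 * math.log10(n + 1))

def deployment_score(n_projects: int, donor_env: float) -> float:
    # Equal-weighted average of the two sub-scores; donor_env is already 0-5.
    return (projects_subscore(n_projects) + donor_env) / 2.0
```

Note the compression the log buys: moving from 1 to 10 projects adds about 1.85 points, while moving from 10 to 100 adds only about 2.4 before the cap binds.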
Section 3

Bands, tiers, and citation forms

The CCS 0-100 number is the audit form. The four-tier band is the citation form. Either can be used in isolation; both are always published together.

Tier | CCS range | Label | Interpretation
Tier 1 | 65.0 and above | Deployment-ready | The country clears a high bar on the forward-looking components and shows active deployment. Standard due diligence suffices. Expected top ~15% of 54 countries.
Tier 2 | 50.0-64.9 | Deployable with mitigants | One or more components materially below continental median, but other components compensate. Specific mitigants (structure, insurance, co-investor profile) required. Expected ~25% of 54.
Tier 3 | 35.0-49.9 | Selective deployment | Deployment defensible only for specific strategies (community-conservancy focus, blended-finance structures, donor-led vehicles). Not a passive-allocation destination. Expected ~35% of 54.
Tier 4 | Below 35.0 | Monitoring-only | Political, legal, or capital-flow environment unsuitable for long-horizon conservation capital deployment under current conditions. Monitor for inflection. Expected ~25% of 54.

The tier thresholds are absolute, not percentile-based. A country's tier does not change because other countries' CCS changed. This is deliberate. A percentile-based system would mean a country could drop a tier because another country improved, which is nonsensical for an allocation decision. We report continental percentile alongside the absolute tier for context only.
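Because the thresholds are absolute, the banding rule is a pure threshold function of a country's own CCS. A minimal sketch (the function name is ours):

```python
def tier(ccs: float) -> int:
    """Absolute tier thresholds; independent of other countries' scores."""
    if ccs >= 65.0:
        return 1  # Deployment-ready
    if ccs >= 50.0:
        return 2  # Deployable with mitigants
    if ccs >= 35.0:
        return 3  # Selective deployment
    return 4      # Monitoring-only
```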

Citation forms

For press, pitch decks, and third-party reference, use any of:

Canopy requests attribution on any public reproduction of CCS or its components. See the corrections policy.

Section 4

Edge cases and null handling

Composite scores live and die by how they handle missing data. Every CCS component has an explicit null rule, published above and restated here for the reader who wants to know the failure modes before trusting the number.

Missing PACE ratings

Continental median imputation. Chosen over zero-fill (which would penalise low-coverage countries unfairly) and over exclusion from CCS (which would shrink the universe). The continental median is recomputed quarterly from the published PACE cohort. Imputed countries are flagged with a coverage pip indicator on every page.
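Under these rules a country's PACE input reduces to a mean-or-median branch. A minimal sketch, assuming `cohort` is the published PACE cohort of composite scores; the names and the returned imputed flag are ours:

```python
from statistics import median
from typing import List, Tuple

def country_pace(pa_scores: List[float], cohort: List[float]) -> Tuple[float, bool]:
    """Return (PACE score, imputed?). Zero rated PAs -> continental median."""
    if pa_scores:
        # Simple arithmetic mean: no operator weighting, no area weighting.
        return sum(pa_scores) / len(pa_scores), False
    return median(cohort), True
```

The boolean is the code-level analogue of the coverage pip: it travels with the score so downstream consumers know the number was imputed.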

Countries with few tracked projects

Log-scaling prevents sample-size distortion. A country with 1 tracked project scores meaningfully above zero (reflecting the presence of deployment signal) without claiming parity with a country that has 20. Countries with zero tracked projects score zero on the projects sub-score; this is penalising-by-design, not an accident.

Small island states and data-limited jurisdictions

Seychelles, São Tomé and Príncipe, Comoros, Cabo Verde, and Mauritius have limited PA cohorts, limited project pipelines, and small absolute capital-deployment footprints. Their CCS scores should be read with the coverage pip prominently in mind. A Tier 1 Seychelles and a Tier 1 South Africa are both defensible but for different reasons.

Active conflict zones

Somalia, South Sudan, Libya, Sudan. RISK composite already captures the relevant information at 40% weight. CCS does not apply a separate conflict override. We considered a hard floor (any country with RISK composite below 2.0 forced to Tier 4) and reject it as redundant: the weighted formula already produces Tier 4 for these countries without a hand-tuned cutoff.

Refresh cadence

Components refresh asynchronously. RISK is scored on human-review cycles (roughly quarterly), PACE on PA-addition cycles (monthly to quarterly), POLICY on tracker updates (rolling), DEPLOYMENT on projects-directory updates (rolling). CCS is recomputed on every component update and time-stamped. The number on any dossier is the CCS as of the last component refresh for that country, not a static quarterly number.

Section 5

Formulations considered and rejected

A methodology page that shows only the chosen formulation invites the reader to imagine that no alternatives were considered. Here are the ones we considered and why they lost.

Rejected: Equal weights across four components (25% each)

The equal-weighting prior is tempting because it avoids the appearance of editorial judgement. But equal weighting is itself an editorial judgement, and a weaker one. RISK carries more signal than POLICY on a ten-year horizon; pretending otherwise produces a rank order that practitioners would override by hand, which is worse than an opinionated index they can disagree with explicitly.

Rejected: Percentile-based bands (top 15%, next 25%, etc.)

Percentile bands mean a country can drop a tier because another country improved. For allocation decisions this is nonsensical. Absolute thresholds preserve the meaning of the tier across time. We report percentile as supplementary context only.

Rejected: Geometric mean of components

Geometric mean has an attractive property: any component scoring zero forces CCS to zero. This overweights single-component failures in a way that would produce unhelpful rankings (a country with deep RISK strength and a pending policy framework would be penalised disproportionately). Arithmetic mean is the right choice for a composite where components are complements, not gates.
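The zero-forcing property is easy to verify numerically. A sketch with hypothetical component scores (the published POLICY mapping floors at 1.0, so the 0.0 below is purely illustrative; both function names are ours):

```python
import math

WEIGHTS = [0.40, 0.25, 0.15, 0.20]  # RISK, PACE, POLICY, DEPLOYMENT

def arithmetic_composite(scores):
    # The chosen formulation: weighted arithmetic mean, rescaled to 0-100.
    return 20.0 * sum(w * s for w, s in zip(WEIGHTS, scores))

def geometric_composite(scores):
    # The rejected formulation: any zero component forces the result to zero.
    return 20.0 * math.prod(s ** w for w, s in zip(WEIGHTS, scores))

# Hypothetical country with deep RISK strength and a zeroed policy component:
country = [4.5, 3.5, 0.0, 2.0]
```

Here the arithmetic composite stays above 60 while the geometric composite collapses to 0.0, which is exactly the disproportionate single-component penalty the methodology rejects.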

Rejected: Adding a "size" component (GDP, land area, population)

CCS should answer "how deployable is this country" not "how big is the addressable opportunity". Size is a separate question and should drive allocation weighting downstream of CCS, not be bundled into it. A small well-governed country is not a worse destination per dollar than a large poorly-governed one.

Rejected: Separate sovereign-credit-rating input

S&P/Moody's/Fitch sovereign ratings correlate heavily with RISK tier-1 dimensions (governance, donor environment), so adding them would double-count. We considered using them as a cross-check but ultimately they do not add information to the composite.

Rejected: Instrumenting on forward-looking operator pipeline

We considered adding a component tracking announced-but-unclosed operator expansions (African Parks next-PA commitments, new concession awards). Rejected on grounds that announcement-based data is noisy and reward-hackable. If a pipeline matures it will show up in DEPLOYMENT within 12-18 months; until then it shouldn't move the score.

Section 6

Known limitations and failure modes

Any composite lies in some direction. Here are the directions we know CCS lies in, published so practitioners can adjust.

1. PACE coverage skew toward African Parks operator model
As of the current PACE cohort, 44% of rated PAs are African Parks-managed. Countries where African Parks has a large footprint (Malawi, Zambia, Central African Republic, Chad, Mozambique) have PACE means that over-represent one operator's performance. Countries with community-conservancy-dominant models (Namibia NACSO, Kenya KWCA, Tanzania WMAs) are under-represented. We are actively rebalancing PACE coverage; re-check CCS after each major PACE cohort expansion.
2. Projects directory has coverage gaps in Francophone West Africa
Canopy's projects directory is assembled from Verra, Gold Standard, and other registries whose English-language reporting is denser than Francophone equivalents. Benin, Burkina Faso, Niger, Mali, Côte d'Ivoire, and Senegal are likely under-counted. DEPLOYMENT scores for these countries are biased downward. CCS scores for Francophone West Africa should be read as lower bounds.
3. RISK tier-2 editorial dimensions have an LLM-drafter footprint
Five of RISK's eight dimensions are editorially assisted (LLM drafts with human review). LLM drafting has systematic biases: it tends to hedge where data is thin and to weight recent press more heavily than long-arc structural factors. Human review corrects the worst of this but not all. A country whose recent news cycle was unusually negative will have a RISK composite slightly depressed; the inverse for positive news. CCS inherits this. Quarterly re-review smooths it over time.
4. POLICY is binary-ish; CCS changes stepwise on framework signings
When a country moves from in-negotiation to agreement-signed, its POLICY score jumps from 3.0 to 4.0, adding 3 points to CCS in one step. This is correct (the event is discrete) but produces visible score jumps on announcement days. Readers tracking CCS time-series should expect stepwise movement on the POLICY axis, not smooth drift.
5. CCS does not capture within-country variance
Kenya the country has a single CCS. Kenya's Samburu, Kenya's Laikipia, and Kenya's coastal Lamu are three different investment contexts. CCS is wrong for any decision that turns on within-country geographic selection. For that the PACE, Keystones, and intelligence layers must be consulted directly. CCS is a starting filter, not a decision.
6. Small-sample countries carry wider uncertainty bands we do not publish
A country with 1 rated PA, 1 tracked project, and zero Who's Who profiles has a CCS whose standard error is meaningfully higher than a country with 5 PAs and 30 projects. We do not currently publish uncertainty bands on CCS because the inputs are heterogeneous (some data, some editorial) and a clean error model is not straightforward. The coverage pip is the current proxy; a full uncertainty-quantified CCS is on the roadmap.
Operating principle. If CCS disagrees with your priors on a country you know well, CCS is more likely to be wrong than your priors are. The value of CCS is not in the countries where you have strong priors; it is in the 40 or so countries where you do not.
Section 7

How CCS compares to peer indices

CCS is not the first index that ranks African countries. It is the first purpose-built for conservation capital. The relevant peers and what they miss:

World Bank Worldwide Governance Indicators (WGI)
Six dimensions covering governance, rule of law, corruption, regulatory quality, political stability, and voice. Annual publication, data-driven, respected.
What WGI misses: protected-area management quality, conservation-capital market maturity, operator track records, Article 6 readiness. WGI is a governance index, not a conservation-finance index. CCS uses WGI as an input to RISK, not as a substitute.
World Bank Doing Business (discontinued 2021)
Ten-pillar composite on ease of starting and operating a business. Historically cited for African allocation decisions.
What Doing Business missed: conservation-specific friction (PA concession regimes, community-tenure law, wildlife economy regulation). Discontinued in 2021 after data-integrity concerns. Canopy does not use Doing Business inputs.
EIU Country Risk Service
Quarterly composite on sovereign, political, and economic risk. Subscription-only. High quality, widely cited on trading desks.
What EIU misses: conservation-sector specifics. EIU's country risk scores are built for sovereign-bond allocators and corporate treasurers, not for long-horizon illiquid conservation capital. A country can be high EIU risk and low CCS risk if its conservation-specific institutions (PA law, community tenure, operator ecosystem) are strong.
Transparency International Corruption Perceptions Index
Annual country ranking on perceived corruption. Input to RISK governance dimension.
What CPI misses: everything that is not perceived corruption. Narrow by design.

CCS is designed to be the reference index a conservation-finance practitioner reaches for first. When CCS disagrees with WGI, EIU, or CPI, the disagreement is itself signal: those indices were built for different decisions.

Section 8

Revision policy

How CCS changes over time, and under what circumstances.

Component-level refresh (automatic)

CCS recomputes on every refresh of any underlying component. No manual trigger. Each country page shows the last-updated timestamp for its CCS, accurate to the day.

Methodology-level revision (deliberate, rare)

Weights, band thresholds, and null-handling rules are treated as version-controlled. Any change to these is published with (a) the old value, (b) the new value, (c) the reason for the change, (d) the effective date. Historical CCS under old methodologies is preserved on request.

The current version is CCS v1.0, effective 23 April 2026. Material revisions will bump the version number (v1.1, v2.0, etc).

Backward compatibility

Past CCS values are not re-stated under new methodologies. A country's "CCS movement" time-series will always compare like-for-like within a single methodology version. Cross-version comparisons are disclosed explicitly.

Section 9

Editorial review and corrections

Every CCS component has a source trail. Every source-trail entry is reviewable. Every review is open to challenge.

Human review. RISK and PACE composites are LLM-drafted and human-reviewed before publication. POLICY is editorial input based on public tracker data. DEPLOYMENT is derived from Canopy's projects directory, which is human-curated from public registries. CCS is a pure composition step; the composition itself is deterministic.

Source trail. Every RISK dimension note carries source URLs. Every PACE dimension note carries source URLs. The POLICY tracker is editorial. The projects directory cites registry IDs and issuer pages. A reader disputing a CCS value can trace it back to the underlying evidence in 2-3 clicks.

Corrections. Errors should be reported to corrections@canopy.africa. Fourteen-day response SLA. Verified corrections are merged into the next CCS refresh with public note of the change.

Disputes. Any country or operator disputing its CCS, any component score, or any source citation can request a formal review. Canopy will publish the dispute, the review process, the decision, and (if warranted) the correction. Canopy does not suppress disputes.