You’re probably in a familiar position. A stakeholder asks whether the market is accelerating, slowing, or shifting between segments. At the same time, a rival changes pricing, adds roles in a new region, ships a feature set, or rewrites its homepage. The hard part isn’t spotting movement. It’s quantifying it in a way that survives scrutiny.
That’s where the market growth formula still matters. Not as a boardroom-only metric, and not as a lagging market-research exercise, but as a working model for competitor intelligence. Used properly, it helps PMMs and CI teams connect macro market expansion to micro competitor movement. Used badly, it turns a few noisy alerts into overconfident stories.
A defensible view starts with the maths, then moves quickly into evidence quality. If your inputs are weak, your growth narrative will be weak too.
Table of Contents
- What the market growth formula actually does
- Why PMMs should apply it below market level
- How to calculate growth from verified competitor signals
- What counts as a good input and what does not
- Examples that turn the formula into action
- Common errors that break the case for action
- How to present the number to leadership
- Where the market growth formula helps and where it does not
- FAQ
What the market growth formula actually does
A PMM usually sees the problem in a weekly review. One competitor has added enterprise pricing. Another is hiring implementation managers in regulated markets. A third has shipped three workflow releases in a quarter. The question is not whether those moves look busy. The question is whether they add up to measurable growth.
The market growth formula gives you a way to quantify that movement between two points in time. In plain terms, it measures the percentage change in a defined object:
((Current Value – Past Value) / Past Value) × 100
For strategy teams, that object is often total market size. For competitive intelligence work, the more useful application is narrower. The object can be a rival's sales headcount in one region, the number of pricing packages on the site, the count of product integrations, or the volume of verified customer logos in a target segment. The math stays the same. The discipline comes from defining the object tightly enough that the comparison is fair.
The basic formula
The standard formula answers a simple operator question. How much did one verified thing change over one measured period?
If a competitor's public headcount in a named function rises from 150 to 225 over 12 months, the growth rate is:
((225 – 150) / 150) × 100 = 50%
That 50% does not prove revenue growth. It does prove material expansion in the thing you chose to track. In practice, that is often enough to justify a closer review of pricing, segment focus, capacity build, or launch timing.
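The calculation is simple enough to sketch as a small helper, which becomes useful once you track many signal series with the same formula. The function name and guard are illustrative choices, not from any specific tool:

```python
def growth_rate(past: float, current: float) -> float:
    """Percentage change between two verified observations of the same object."""
    if past == 0:
        # A zero baseline makes the percentage undefined; pick an earlier window.
        raise ValueError("Baseline must be non-zero.")
    return (current - past) / past * 100

# The headcount example from the text: 150 roles a year ago, 225 today.
print(growth_rate(150, 225))  # 50.0
```

The same function handles declines: `growth_rate(100, 90)` returns `-10.0`, which matters when you brief a slowdown rather than a surge.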
I usually apply one rule before I trust the number. Define the object, the source, and the time window before touching the calculator.
Without that discipline, the formula produces clean-looking noise. Analysts get into trouble when they compare total global headcount in one snapshot with regional hiring data in another, or when they treat a website update as if it reflects full commercial rollout.
When to use CAGR instead
The standard growth formula works well for single-period changes. It gets less useful when you need to describe the pace of change across several years and smooth out uneven jumps between periods. That is the job of CAGR, or compound annual growth rate.
The formula is:
((Ending Value / Beginning Value)^(1 / Number of Years) – 1) × 100
Use CAGR when the decision depends on trend direction over a longer horizon, such as a category buildout, a sustained hiring program, or a multi-year expansion into an adjacent segment. Use the simple growth rate when you are measuring a discrete change and need to know whether a competitor has moved enough to trigger action.
The practical trade-off is straightforward. Standard growth is better for short-interval signal reading. CAGR is better for strategic pacing. If you use CAGR on noisy quarterly competitor signals, you can smooth away the exact surge leadership needs to see. If you use a one-period growth rate for a multi-year story, you can overstate a temporary spike and build the wrong case.
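The CAGR formula above can be sketched the same way. The validation logic is an assumption about sensible inputs, not part of the formula itself:

```python
def cagr(beginning: float, ending: float, years: float) -> float:
    """Compound annual growth rate, as a percentage, over a multi-year window."""
    if beginning <= 0 or years <= 0:
        raise ValueError("Needs a positive baseline and a positive number of years.")
    return ((ending / beginning) ** (1 / years) - 1) * 100

# Multi-year example used later in the article: 40 observations rising to 90 over 3 years.
print(round(cagr(40, 90, 3), 1))  # 31.0
```

Note the design choice: keeping `growth_rate` and `cagr` as separate functions, rather than one function with a mode flag, mirrors the editorial rule in the text — choose the tool for the time horizon before you calculate.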
Why PMMs should apply it below market level
A PMM gets pulled into the weekly forecast call because a competitor has posted new security roles, changed packaging on the pricing page, and published three release notes tied to admin controls in the same month. The category report on leadership’s slide still says the market is growing, but that does not answer the real question. Is this rival making a routine update, or building toward an enterprise push that will change win rates next quarter?
That is where the market growth formula becomes useful for PMM work. Applied below market level, it helps quantify whether a competitor’s movement is small, steady, or accelerating across a specific area you can verify.
Top-down market numbers are still useful. They set context, support budgeting, and help explain why a category is attracting investment. But they are usually late for competitive response. PMMs need earlier evidence, and earlier evidence usually shows up in small public changes first: more jobs in a named function, new pricing language, added implementation services, fresh compliance pages, or a burst of feature documentation.
The practical advantage is speed with discipline.
Instead of waiting for a market report to confirm a shift, PMMs can measure competitor motion inside a narrower object and time window. That makes the formula operational. A 40% increase in enterprise sales roles over two quarters does not prove revenue growth. It does support a defensible claim that the company is investing in enterprise coverage. A rise in feature pages tied to governance or audit trails does not prove full adoption. It does support the case that the product is being shaped for stricter buying environments.
That distinction matters. PMMs do not need perfect visibility into a competitor’s books. They need enough verified movement to help sales, product, and leadership decide whether to respond.
A below-market view also handles a common trade-off better than category research does. Broad market growth can stay stable while one competitor changes course fast. I have seen teams miss that pattern because they kept quoting category expansion while a rival was rebuilding pricing and proof for a different segment. By the time revenue impact showed up in the field, the public signals had been visible for months.
Use the formula here as a test of momentum, not as a substitute for strategy. If several verified signals in the same area are growing over the same period, PMMs can build a stronger case that the movement is coordinated rather than random.
A practical framing works well:
- Start with the competitor move: define one measurable change, such as hiring in a function, pricing structure, or feature documentation.
- Quantify the rate of change: calculate growth over a fixed interval using the same source type.
- Interpret the commercial meaning: explain what that pattern could support, such as segment expansion, deal-size growth, or procurement readiness.
- State the confidence level: separate confirmed facts from reasoned inference.
That last step is where good PMM work stands apart from commentary. The number supports the case. The case still depends on judgment, source quality, and restraint.
How to calculate growth from verified competitor signals
The formula itself is simple. The difficult part is deciding what belongs in the numerator and denominator.
Use one measurable object at a time
CI teams get into trouble when they mix different objects into one growth story. Pricing page edits, jobs growth, and feature launches may all point in the same direction, but they shouldn’t be forced into a single percentage.
Treat each input as its own trackable series. Examples include:
- Hiring movement: count roles in a defined team, region, or function from verified public careers pages.
- Packaging movement: track changes in tier names, plan structure, seat language, or procurement language across archived pricing pages.
- Feature movement: count net-new feature pages, release-note entries, or documentation sections tied to a product area.
- Proof movement: track customer logos, case-study pages, compliance pages, or investor-facing claims that indicate segment focus.
Each series gets its own baseline. Each baseline gets its own formula.
A simple operator workflow
The workflow that works in practice is straightforward:
1. Choose the object: pick one unit you can count consistently over time.
2. Verify the source: use a public source you can inspect later. Careers pages, pricing pages, product docs, regulatory filings, investor updates, and release notes are stronger than second-hand summaries.
3. Set the time window: compare like with like. Monthly to monthly, quarter to quarter, or year to year.
4. Run the formula: apply the standard market growth formula for single periods, or CAGR for multi-year periods.
5. Attach the evidence chain: keep screenshots, archived pages, source URLs, and change dates.
6. Interpret only after verification: don’t jump from “headcount grew” to “they’re winning the market” unless you have corroborating proof.
A lot of noisy tools reverse steps five and six. They produce a narrative first, then leave the operator to hunt for proof after the fact. That’s exactly backwards.
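One way to keep each series honest in practice is to store the baseline, the source surface, and the evidence links together, so the growth number can never be separated from its provenance. This is a minimal sketch; the field names and archive URLs are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class SignalSeries:
    """One trackable competitor signal with its own baseline and evidence chain."""
    object_name: str      # e.g. "enterprise sales roles, London careers page"
    source_surface: str   # every observation must come from this same surface
    observations: list = field(default_factory=list)  # (date, value, archive_url)

    def record(self, date: str, value: float, archive_url: str) -> None:
        self.observations.append((date, value, archive_url))

    def growth(self) -> float:
        """Simple growth rate between the first and last recorded observations."""
        first, last = self.observations[0][1], self.observations[-1][1]
        return (last - first) / first * 100

series = SignalSeries("enterprise sales roles", "competitor careers page")
series.record("2024-01", 150, "https://web.archive.org/...")  # placeholder URL
series.record("2025-01", 225, "https://web.archive.org/...")  # placeholder URL
print(series.growth())  # 50.0
```

Because the archive URL is a required argument, an analyst cannot record a value without also recording where a reviewer can inspect it — the evidence chain is structural, not optional.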
What counts as a good input and what does not
The quality of your output depends on the quality of the public movement you track. The formula doesn’t rescue bad inputs.
High-trust inputs
Good inputs share three traits. They are public, stable enough to inspect, and specific enough to support a decision.
| Input type | Why it works | Typical use |
|---|---|---|
| Careers pages | Roles are countable and dated by publication windows | Hiring direction, regional expansion, function build-out |
| Pricing pages | Packaging and copy changes are inspectable | Commercial model shifts, enterprise push, discount pressure |
| Product docs and release notes | Product movement is usually attributable to a specific area | Roadmap parity, launch velocity, segment readiness |
| Regulatory or investor filings | Higher trust, often more formal | Market entry, compliance posture, legal structure |
These sources won’t tell you everything. They will usually tell you enough to support a credible first assessment.
Weak inputs that create false certainty
Weak inputs often look useful because they’re easy to collect. They’re still weak.
- Unverified summaries: they may be directionally right, but you can’t defend them under review.
- Single alert spikes: one captured change rarely supports a strong growth claim on its own.
- Mixed-source counts: if one month uses a careers page and the next uses a third-party jobs board, the baseline is broken.
- Interpretation without artefact review: if nobody checked the actual page-level diff, confidence should stay low.
If a stakeholder can’t inspect the underlying movement, you don’t have a reliable growth claim. You have a story.
Examples that turn the formula into action
Monday morning, a PMM walks into pipeline review and gets the familiar question: “Are competitors gaining ground, or are we reacting to noise?” A usable answer starts with a small, verified object and a clear formula. It does not start with a broad market report that arrived three quarters late.
Headcount growth as a regional threat signal
Start with a count you can defend. If a competitor had 150 employees listed in London a year ago and now shows 225, the growth rate is:
Growth rate = ((225 – 150) / 150) × 100 = 50%
The math is simple. The judgment is harder.
A 50% increase in one city does not automatically mean revenue growth. It can mean a hiring catch-up, a new support hub, or a shift in legal entity structure. The useful move is to break the count by function and test whether the pattern matches a commercial push.
Ask four questions:
- Are the new roles concentrated in sales, solutions, and customer success, or mostly in back-office functions?
- Do the hiring dates line up with a pricing change, new enterprise language, or fresh customer proof?
- Is the build-out happening in a region where you compete for active deals?
- Does the pattern justify a response in messaging, coverage, or roadmap priority?
Newer PMMs frequently overstate the significance of raw headcount data. A headcount number on its own supports monitoring. A headcount number that matches pricing changes, feature releases, and regional GTM hiring supports action.
Category growth built from verified bottom-up inputs
The same formula works when the question is bigger than one competitor. Sometimes leadership needs a view of whether a category segment is expanding fast enough to justify investment. For that, use a bottom-up model with assumptions you can inspect.
A practical version looks like this:
Estimated segment size = target account count × expected purchase rate × average contract value
Then measure how that estimate changes over time with the standard growth formula.
This approach is more useful in CI and PMM work than a top-line TAM slide because each variable can be challenged separately. If someone disputes the number, the discussion gets specific. Are we overstating the number of viable buyers? Are we assuming too many purchases per year? Is the average price anchored to enterprise deals while most of the segment buys lower-tier plans?
That changes the quality of the decision. The team stops arguing about “market potential” in general terms and starts testing whether the segment you serve is growing.
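The bottom-up estimate can be written out so each assumption is a named variable anyone can challenge independently. All figures below are invented for illustration, not drawn from any real segment:

```python
def segment_size(accounts: int, purchase_rate: float, acv: float) -> float:
    """Bottom-up estimate: viable buyers x expected purchase rate x average contract value."""
    return accounts * purchase_rate * acv

# Hypothetical inputs for two consecutive years.
last_year = segment_size(accounts=2000, purchase_rate=0.10, acv=25_000)  # 5,000,000
this_year = segment_size(accounts=2300, purchase_rate=0.12, acv=26_000)  # 7,176,000

growth = (this_year - last_year) / last_year * 100
print(round(growth, 1))  # 43.5
```

If a stakeholder disputes the 43.5%, the argument moves to a specific keyword argument — account count, purchase rate, or contract value — which is exactly the shift in discussion quality the text describes.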
CAGR for noisy competitor movement
Sometimes the monthly pattern is too uneven to brief cleanly. A competitor may release heavily in one quarter, pause in the next, then expand pricing and packaging later. In that case, CAGR helps normalize the period.
Use:
CAGR = ((Ending Value / Beginning Value)^(1 / Number of Years) – 1) × 100
For example, if a verified competitor signal rises from 40 observed enterprise-relevant product updates to 90 over three years, the CAGR is:
CAGR = ((90 / 40)^(1 / 3) – 1) × 100 ≈ 31%
That number is not a claim about revenue. It is a cleaner description of sustained movement in a specific object.
I use CAGR when leadership wants to compare strategic tracks on the same time horizon. One competitor may be expanding feature depth. Another may be expanding hiring in regulated-market roles. A third may be increasing enterprise packaging changes. CAGR gives you a common frame, but only if each input is measured consistently across the whole period.
Turning the number into a decision
The point of these examples is not the arithmetic. It is the action threshold.
If pricing-page changes are up 25%, enterprise-role hiring is up 40%, and release-note volume in security features is up over the same period, you have a defensible case that the competitor is committing to larger, more regulated accounts. That can justify three concrete moves: update competitive battlecards for enterprise deals, stress-test packaging gaps, and tighten proof for the accounts most likely to face that pressure.
If only one signal moved, keep the brief narrow. Call it directional. Recommend monitoring, not a company-wide response.
Good operator work comes from matching the confidence of the recommendation to the quality of the inputs.
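One way to enforce that matching is a small screen that maps how many independent surfaces moved past a threshold to the strength of the recommendation. The 20% threshold and the tier labels are illustrative choices, not a standard, and the third input figure below is invented:

```python
def recommendation(signal_growths: dict[str, float], threshold: float = 20.0) -> str:
    """Map the number of surfaces moving past a threshold to a recommendation tier."""
    moving = [name for name, pct in signal_growths.items() if pct >= threshold]
    if len(moving) >= 3:
        return f"escalate: coordinated movement across {', '.join(moving)}"
    if len(moving) == 2:
        return "test response: two corroborating surfaces"
    if len(moving) == 1:
        return "monitor: single directional signal"
    return "no action: below threshold"

# Illustrative inputs: pricing +25%, hiring +40%, release notes +30% (last figure invented).
print(recommendation({"pricing": 25.0, "hiring": 40.0, "release notes": 30.0}))
```

A single moving signal never produces more than "monitor", which encodes the rule from the text: keep the brief narrow when only one surface moved.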
Common errors that break the case for action
A PMM walks into a forecast review with a clean growth number based on competitor hiring. The math is right. The recommendation still gets rejected because the underlying object is weak, the baseline shifted, and the brief subtly blended observed facts with future estimates.
That is the primary failure mode. Bad arithmetic is easy to catch. Bad case construction is what slips through.
Confusing visible activity with decision-relevant growth
Competitors create noise all the time. More blog posts, more job listings, more release notes, more pricing-page edits. None of that matters unless the movement changes a decision you need to make.
I use a simple screen before I calculate anything. Ask whether the signal changes one of these:
- Buyer choice
- Deal risk
- Roadmap pressure
If the answer is no, do not build a strategic recommendation on it.
This matters most at the micro-signal level. A spike in hiring can mean expansion, backfill, or reorg cleanup. A burst of feature releases can mean real product investment, or it can mean a team finally documented work that shipped months earlier. The formula only measures change in the object you chose. It does not prove commercial impact.
Mixing baselines that are not the same object
This is one of the fastest ways to lose credibility with leadership.
Teams often compare a current enterprise pricing page to an older general pricing page, a global hiring count to a regional hiring count, or security release notes this quarter to all product documentation last quarter. The math will still produce a percentage. That percentage is useless.
Use a consistent object across the full period. Same source surface. Same inclusion rule. Same scope.
If the competitor changed the page structure, product taxonomy, or job-site setup halfway through the window, reset the baseline. I would rather defend a shorter, cleaner time series than a longer one built on unlike inputs.
Treating forecasts as evidence
Projected growth has a place in planning. It does not belong in the same evidence bucket as verified competitor movement.
As noted earlier in the article, broad market-growth examples often combine historical values with projected ones to show how CAGR works. That is fine for teaching the formula. It is not fine for a competitor brief unless you label the forecast clearly and keep it separate from observed change.
A simple rule helps here. Public pricing changes, headcount trends from verified role postings, release-note volume, packaging edits, and case-study additions belong in the evidence section. Analyst forecasts, TAM models, and management guidance belong in the assumptions or outlook section.
Once those categories get mixed, the recommendation usually becomes overstated.
Forcing precision from thin or biased samples
A clean percentage can hide a weak sample.
If you observed two pricing changes and one new compliance page, do not present a growth rate as if you measured a stable trend. The same problem shows up when a team tracks only one competitor surface because it is easy to monitor, then generalizes the result to the whole company motion.
Small samples are still useful. They just support smaller claims.
Call the signal directional when coverage is narrow. Escalate only when multiple verified surfaces point in the same direction. In practice, I trust the number much more when pricing, hiring, and product evidence all move together than when one metric jumps in isolation.
Ignoring timing and causality
Growth rates flatten sequence. Strategy does not.
A competitor can show strong annual growth in enterprise-role hiring while cutting enterprise packaging experiments in the same period. If you collapse everything into one window, you can miss the order of operations that explains the move. Hiring may be the setup. Packaging may come later. Or the hiring push may have stalled.
Check the timeline before you write the recommendation. Month-level sequencing often tells you more than the summary rate.
Writing a stronger conclusion than the evidence supports
This is the error that breaks trust fastest. Analysts see three correlated signals and jump to revenue conclusions, segment dominance, or buyer preference claims that the inputs cannot support.
Keep the claim tied to the measured object. Say the competitor increased security-related release activity. Say enterprise hiring rose across named functions. Say pricing-page complexity increased. Do not convert those observations into market-share claims unless you have direct evidence for that step.
A defensible brief does not sound dramatic. It sounds specific, bounded, and usable.
How to present the number to leadership
Most executives don’t need more formulas. They need a claim they can trust, a source they can inspect, and a recommendation that matches the confidence level of the evidence.
The minimum credible brief
A reliable growth brief is usually short. It should include:
| Element | What leadership needs |
|---|---|
| Measured object | What exactly changed |
| Baseline and current value | The two numbers used in the formula |
| Time period | The comparison window |
| Source surface | Where the evidence came from |
| Confidence note | Verified movement versus interpretation |
| Recommended action | Monitor, test response, or escalate |
This structure works because it separates fact, maths, and meaning. Too many updates mix all three and end up sounding stronger than they are.
A useful review format
When I review CI work with PMM teams, the strongest briefs usually follow this sequence:
1. Observed movement: state the public change first.
2. Growth calculation: show the formula and numbers plainly.
3. Corroborating signals: add related pricing, hiring, proof, or product evidence.
4. Decision relevance: explain which team should care and why.
5. Confidence boundary: state what the evidence supports, and what it doesn’t.
That last point matters more than teams tend to realise. Credibility comes from stating the limit of the claim, not from trying to sound definitive.
Where the market growth formula helps and where it does not
A PMM sees a competitor add an enterprise security page, post five solutions consultant roles, and raise list price within six weeks. The immediate question is not whether the whole market is growing. It is whether those verified moves add up to a pattern worth acting on.
The market growth formula helps because it gives that pattern a measurable shape. Applied to micro-signals, it turns scattered observations into a rate of change you can compare across rivals, time periods, and functions. That is useful when category reports are too broad or too late to guide a pricing response, a sales enablement update, or a launch adjustment.
Where it helps
The formula is strongest in situations where the tracked object is stable and public.
- Comparing competitor movement over time. A rise in open roles, partner pages, plan tiers, or documented integrations becomes easier to evaluate once converted into percentage change.
- Separating meaningful change from noise. Three new features may sound important. A 30% increase in released modules over one quarter is easier to benchmark against prior periods.
- Testing whether multiple signals point in the same direction. If hiring, packaging, and customer proof all rise in the same segment, the growth rate helps quantify that concentration.
This is especially useful below market level. Broad market numbers can frame the environment, as noted earlier, but they rarely tell you which competitor is accelerating, where, and how fast. Public competitor signals often do.
Where it does not help
The formula does not establish causation. It does not prove that a pricing change worked, that a hiring push translated into shipped product, or that a new segment bet will hold for the next two quarters.
It also breaks down when the underlying object changes shape.
A raw growth rate on pricing is weak if the competitor changed packaging, usage limits, contract terms, and discount policy at the same time. A growth rate on hiring is weak if half the roles are reposted placeholders. A growth rate on feature count is weak if one release is a minor UI adjustment and another is a major platform capability. The math can still be correct. The conclusion can still be wrong.
The practical boundary
I use the formula to answer one narrow question. How much confirmed movement happened in a defined period?
I do not use it alone to answer bigger questions about strategy quality, revenue impact, or buyer response. Those require a fuller evidence chain, usually including sales feedback, win-loss themes, customer interviews, and follow-up verification that the public signal led to a real operating change.
Used this way, the market growth formula is a decision support tool. It helps PMMs quantify competitor momentum with more discipline. It does not remove the need to judge signal quality, context, or likely business impact.
FAQ
Is the market growth formula only for large market reports
No. It works at any level where the tracked object is defined consistently. In CI, that often means a narrow public signal such as team headcount, number of pricing tiers, or a count of documented product modules.
What’s the difference between market growth formula and CAGR
The standard market growth formula measures change between two points. CAGR smooths multi-year growth into an annualised rate. Use the first for short operational windows and the second for strategic comparisons across longer periods.
Can I use the formula on competitor pricing changes
Yes, but only if you’re comparing like with like. If the plan structure changed completely, a raw percentage may mislead. In those cases, track structural change separately from price change.
What makes a growth claim defensible in CI
A clear baseline, a stable public source, an inspectable artefact, and a confidence boundary. If any of those are missing, the number may still be interesting, but it isn’t yet briefing-grade.
Should PMMs rely on projections when using the market growth formula
You can use projections for planning, but they should be labelled as projections. Don’t present forecast numbers as confirmed market outcomes.
The practical use of the market growth formula isn’t to make competitor tracking look more analytical. It’s to make your decisions more defensible. Start with one verified public movement. Define the object cleanly. Calculate the change accurately. Then attach interpretation only after the evidence holds up.
If your team wants a proof-first way to track public competitor movement with verified signals, inspectable evidence chains, and confidence-gated outputs, review Metrivant’s verified competitor signals methodology.
