App advertisers have had a harder time with consent-mode measurement than their web counterparts, partly because the tooling has been less visible and partly because the symptoms — stable install volume, falling downstream quality — are easy to misdiagnose as a creative or targeting problem. App Consent Insights is Google's attempt to make the consent-measurement gap visible at the account level.

Used correctly, it is one of the more actionable diagnostics in the Google Ads interface for app campaigns. Used incorrectly, it generates anxiety without direction. This guide covers both what the diagnostic shows and what to do when the rating is not Excellent.

What changed in Google Ads

Google Ads has added a consent quality layer specifically for app campaigns that surfaces aggregate consent coverage data in a format tied to campaign- and account-level performance. Previously, app advertisers had to infer consent quality from modeled conversion volume, SDK error logs, or platform-level discrepancies. The new view brings a direct rating into the campaign diagnostics area.

What the diagnostic appears to show

  • An overall consent quality rating for app campaigns (Excellent / Good / Poor)
  • The proportion of eligible traffic producing observed versus modeled conversions
  • Breakdowns by platform where available (Android vs iOS behavior can differ significantly)
  • Signals about where consent is being denied, not collected, or improperly passed
  • Recommendations linked to specific SDK configuration or CMP setup steps

Why this matters for bidding and measurement

The conversion funnel for an app campaign runs: eligible traffic → consented traffic → measurable conversions → bidding signal quality. Every leak in that funnel reduces the quality of the signal Smart Bidding uses to calibrate bids.

  • Eligible app traffic — all users your campaign could reach
  • Consented traffic — users who have agreed to measurement
  • Measurable conversions — observed events from consented users
  • Bidding signal quality — what Smart Bidding actually optimizes from

When consent coverage is low, step three shrinks and step four becomes increasingly model-dependent. Google's conversion models are reasonable, but they are not a substitute for observed data, particularly in low-volume accounts or accounts targeting niche audience segments, where model generalization is weaker.
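
To make that model dependence concrete, the split reduces to a simple ratio. A minimal Swift sketch, using hypothetical counts in place of the figures from your own conversion action report:

    import Foundation

    // Hypothetical monthly figures from a conversion action report.
    let observedConversions = 412.0  // events recorded from consented users
    let modeledConversions = 188.0   // events Google estimates for unconsented traffic

    // Share of total conversions supplied by the model rather than observation.
    let modeledShare = modeledConversions / (observedConversions + modeledConversions)
    print(String(format: "Modeled share: %.1f%%", modeledShare * 100))  // 31.3%

    // Step 6 of the workflow below uses 30 percent as a caution threshold.
    if modeledShare > 0.30 {
        print("Treat downstream performance data with extra caution.")
    }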

The 7-step troubleshooting workflow

  1. Open App Consent Insights in Google Ads and record the current consent quality rating
  2. Check whether the rating differs between Android and iOS — if so, investigate platform-specific SDK configuration first
  3. Verify that the Google Mobile Ads SDK is passing consent state correctly on app launch, before any measurement events fire (see the consent-flow sketch after this list)
  4. Audit your in-app consent prompt: does it appear at an appropriate point in the user journey, or is it being skipped or bypassed?
  5. Check whether consent denials are geographically concentrated — regional privacy regulation differences often explain platform-level consent gaps
  6. Compare modeled versus observed conversion volume in the conversion action report; if the modeled share exceeds 30 percent of total conversions, treat downstream performance data with extra caution
  7. After any SDK or CMP fix, allow two to four weeks before re-evaluating the consent quality rating and bidding performance
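
For step 3, the sequence below is a minimal iOS sketch using Google's User Messaging Platform (UMP) SDK. It assumes UMP is your CMP; if you use another provider, the API differs, but the principle holds: resolve consent state before any measurement fires.

    import UIKit
    import UserMessagingPlatform

    // Run at app launch, before the Google Mobile Ads SDK starts and
    // before any measurement event fires (workflow step 3).
    func gatherConsent(from rootViewController: UIViewController,
                       completion: @escaping () -> Void) {
        let parameters = UMPRequestParameters()
        parameters.tagForUnderAgeOfConsent = false

        UMPConsentInformation.sharedInstance.requestConsentInfoUpdate(with: parameters) { error in
            if let error = error {
                print("Consent info update failed: \(error.localizedDescription)")
                return completion()
            }
            // Shows the consent form only where regulation requires it,
            // so the prompt is neither skipped nor bypassed (step 4).
            UMPConsentForm.loadAndPresentIfRequired(from: rootViewController) { formError in
                if let formError = formError {
                    print("Consent form failed: \(formError.localizedDescription)")
                }
                completion()
            }
        }
    }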

Common failure patterns

  • Symptom: Consent rating Poor on iOS, Good on Android. Likely cause: incomplete ATT framework implementation or wrong prompt timing. Priority fix: review App Tracking Transparency prompt placement and timing.
  • Symptom: Stable installs, declining qualified leads. Likely cause: consent denials hiding downstream intent signals from Smart Bidding. Priority fix: audit the consent flow and check the modeled versus observed split in the conversion report.
  • Symptom: Consent rating Poor in specific regions only. Likely cause: regional CMP behavior or regional privacy regulation. Priority fix: check the regional consent prompt configuration in your CMP settings.
  • Symptom: Rating Good but CPA rising unexpectedly. Likely cause: modeled conversions inflating totals while quality slips. Priority fix: enable offline conversion imports for downstream quality signals.
  • Symptom: No consent rating visible in the account. Likely cause: the SDK is not passing consent state, or the app campaign type is not yet supported. Priority fix: verify the SDK version and consent API implementation (a runtime check is sketched below).
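
For the last pattern, a quick runtime check of the consent state the SDK currently holds can rule out the simplest failure, namely that consent is never collected at all. Again a sketch assuming Google's UMP SDK on iOS:

    import UserMessagingPlatform

    // Log the consent state the SDK currently holds. A state stuck at
    // .unknown after launch suggests consent is never collected or passed,
    // which matches the "no rating visible" pattern above.
    func logConsentState() {
        let info = UMPConsentInformation.sharedInstance
        switch info.consentStatus {
        case .obtained:    print("Consent obtained")
        case .required:    print("Consent required but not yet collected")
        case .notRequired: print("Consent not required in this region")
        case .unknown:     print("Consent state unknown; check SDK configuration")
        @unknown default:  print("Unrecognized consent status")
        }
        print("Can request ads: \(info.canRequestAds)")
    }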

Lead gen app example

A financial services company runs app campaigns for a mortgage quote tool. Install volume is stable at around 4,000 per month. Qualified quote requests from within the app, however, have dropped 28 percent over three months while CPL appears flat in Google Ads reporting. The App Consent Insights diagnostic shows a Poor rating, with iOS consent coverage significantly below Android.

Investigation reveals the ATT prompt is appearing too late in the onboarding flow — after users have already completed the initial registration step — meaning many users never see the prompt. After moving the prompt earlier and improving the explanation of why measurement is requested, iOS consent coverage improves over six weeks. The Smart Bidding model recalibrates toward higher-intent users and qualified quote volume recovers without increasing spend.
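
A minimal sketch of the corrected placement using Apple's App Tracking Transparency API, assuming the call is wired into an early onboarding screen that has just explained why measurement is requested:

    import AppTrackingTransparency

    // Request tracking authorization early in onboarding, after the value
    // of measurement has been explained but before registration completes,
    // so users actually see the prompt.
    func requestTrackingConsent(completion: @escaping (Bool) -> Void) {
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                completion(true)   // conversions can be observed directly
            case .denied, .restricted, .notDetermined:
                completion(false)  // measurement falls back to modeling
            @unknown default:
                completion(false)
            }
        }
    }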

eCommerce app example

A retail app with strong purchase volume notices inconsistent ROAS reporting between Android and iOS campaigns. The App Consent Insights view shows Good coverage on Android but Poor on iOS, aligning with the ROAS instability. On iOS, a high proportion of purchases are being modeled rather than observed.

The team cannot fully resolve iOS consent given their user base, but they add an offline purchase import using hashed email matching to supplement the observed signal. This does not fix the consent rating but improves the downstream quality of what Smart Bidding can see, stabilizing ROAS performance enough to make budget scaling decisions with greater confidence.
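
The hashing step for such an import might look like the sketch below, using Apple's CryptoKit. The trim-and-lowercase normalization shown is the common convention for hashed email matching; check Google's current import specification for edge cases before relying on it.

    import CryptoKit
    import Foundation

    // Normalize and hash an email address for privacy-safe matching.
    func hashedEmail(_ raw: String) -> String {
        let normalized = raw
            .trimmingCharacters(in: .whitespacesAndNewlines)
            .lowercased()
        let digest = SHA256.hash(data: Data(normalized.utf8))
        return digest.map { String(format: "%02x", $0) }.joined()
    }

    // Both variants resolve to the same stable identifier.
    print(hashedEmail("  Jane.Doe@Example.com "))
    print(hashedEmail("jane.doe@example.com"))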

What to monitor weekly

  • Consent quality rating: any change from the prior week
  • Modeled versus observed conversion split: track the ratio, not just total volume (a tracking sketch follows this list)
  • CPA and ROAS trend by platform: diverging Android and iOS performance is often a consent signal
  • Downstream lead quality from CRM: are install-to-qualified rates holding even if raw volume looks stable?
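
To put the modeled-versus-observed check on an automated footing, a small sketch with hypothetical weekly figures can flag drift in the ratio even while totals look stable:

    import Foundation

    // Hypothetical weekly snapshots from the conversion action report.
    struct WeeklySnapshot {
        let week: String
        let observed: Double
        let modeled: Double
        var modeledShare: Double { modeled / (observed + modeled) }
    }

    let weeks = [
        WeeklySnapshot(week: "W1", observed: 105, modeled: 38),
        WeeklySnapshot(week: "W2", observed: 99, modeled: 47),
        WeeklySnapshot(week: "W3", observed: 92, modeled: 58),
    ]

    // Totals look stable, but the modeled share is climbing.
    for (previous, current) in zip(weeks, weeks.dropFirst()) {
        let delta = current.modeledShare - previous.modeledShare
        if delta > 0.03 {  // a three-point weekly jump is worth a look
            let points = String(format: "%.1f", delta * 100)
            print("\(current.week): modeled share up \(points) points")
        }
    }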