What is a marketing-qualified account?
A marketing-qualified account, or MQA, is an account-level qualification stage that signals an account has shown enough collective engagement and fit signal to warrant a coordinated marketing-and-sales motion, even when no single individual at the account has yet become a marketing-qualified lead. It is the account-level analog of the classic MQL, designed for ABM, named-account, and committee-led B2B motions where the buying decision is collective and no single contact converts in isolation. MQA emerged as B2B teams realized that scoring leads one at a time misses the buying-committee reality of modern enterprise software purchasing.
See MQA scoring and routing in a 30-minute Abmatic AI demo.
The 30-second answer
A marketing-qualified account is an account whose combined activity (multiple known contacts engaging, anonymous traffic spiking, third-party intent surging, fit criteria matched) has crossed a defined threshold that triggers sales engagement. The MQA stage replaces or augments MQL because in modern B2B, two or three contacts at an account each visiting the pricing page is a far stronger buying signal than any one of them filling out a form, but a pure MQL model misses that signal entirely. MQA captures the collective committee signal in a single account-level state.
How MQA differs from MQL
MQL is person-level
An MQL is one person who has crossed an engagement threshold (visited X pages, downloaded Y content, scored above Z). The MQL goes to a sales rep for follow-up. The model assumes one-buyer-per-deal, which is the consumer or self-serve B2B reality but increasingly not the enterprise reality.
MQA is account-level
An MQA is one account that has shown enough collective signal across all contacts (known and anonymous) to warrant sales engagement. The account's combined behavior becomes the qualification, not any single person's behavior. Multiple contacts moving in the same direction is the signature signal that distinguishes MQA from MQL.
The math difference
An account with five contacts, each at thirty percent of the MQL threshold, is invisible to an MQL model but a high-conviction MQA. That is the central reason MQA exists.
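The arithmetic can be made concrete with a tiny sketch. The point values here are invented for illustration: assume a 100-point per-person MQL threshold and a 120-point account-level MQA threshold.

```python
# Invented numbers: a 100-point per-person MQL bar, a 120-point
# account-level MQA bar, and five contacts each at 30% of the MQL bar.
MQL_THRESHOLD = 100
MQA_THRESHOLD = 120

contact_scores = [30, 30, 30, 30, 30]

# Person-level model: does any single contact qualify?
mqls = [s for s in contact_scores if s >= MQL_THRESHOLD]
# Account-level model: does the combined score qualify?
account_score = sum(contact_scores)

print(len(mqls))                       # 0 -> no contact ever becomes an MQL
print(account_score)                   # 150
print(account_score >= MQA_THRESHOLD)  # True -> the account qualifies
```

The same activity produces zero MQLs but a clear MQA, which is the gap the account-level stage closes.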
The handoff difference
An MQL handoff is a contact record routed to a rep. An MQA handoff is an account record routed to a rep with a brief showing every contact's activity, every anonymous session resolved to the account, every third-party intent signal, and a recommended next action. The rep gets a fuller picture and engages the buying committee, not a single person.
For the deeper definition, see marketing qualified account.
The signals that produce an MQA
Multi-contact engagement
Two or more known contacts at the same account engaging within a defined window. The window is typically two to four weeks. Multi-contact engagement is the strongest single MQA signal because it indicates committee-level interest.
High-intent page visits
Visits to bottom-funnel pages: pricing, demo, case studies, comparison pages. A single committee member on a pricing page is a weak signal; three committee members on the pricing page within ten days is a strong one.
Third-party intent surge
The account is showing surge on relevant topics in a third-party intent feed. Combined with first-party engagement, third-party surge increases conviction substantially. See intent data for the underlying mechanics.
Anonymous traffic from a known account
Reverse-IP-resolved anonymous sessions that match the account, even without a form fill. Multiple anonymous sessions from the same office IP are committee research happening below the surface.
Fit confirmation
The account matches the ICP on industry, headcount, technology, and geography. Without fit, even high engagement is not a productive sales conversation.
Trigger events
Funding round closed, new executive hired, expansion announced. Trigger events compound MQA conviction by adding timing to the engagement signal.
How to design an MQA model
Step 1: define the fit criteria
The non-negotiable ICP filters: industry, headcount band, geography, technology fit. Accounts that do not match are excluded from MQA scoring entirely. See how to build an ICP.
Step 2: assign weights to engagement signals
Different signals carry different weight. A pricing-page visit weighs more than a blog visit; a demo request weighs more than a webinar registration. Most teams start with a heuristic ten-to-fifteen-signal weighting and tune it quarterly against closed-won data.
Step 3: combine into an account score
The score sums the signals across all contacts at the account, weighted by recency. A modern MQA model decays signals on a thirty-to-sixty-day half-life so old activity stops contributing.
Step 4: set the threshold
The threshold is the score above which the account becomes an MQA. Setting the threshold is a pipeline-volume decision: a low threshold produces more MQAs and more sales work; a high threshold produces fewer MQAs with higher conviction. Most teams calibrate the threshold against sales capacity.
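Steps 2 through 4 can be sketched as a small scoring function. This is a minimal illustration only: the signal names, weights, 45-day half-life, and 60-point threshold are all assumptions, and a real model would be tuned quarterly against closed-won data as described above.

```python
# Illustrative sketch of steps 2-4: weighted signals, a recency-decayed
# account score, and a threshold check. Weights and threshold are assumed.
SIGNAL_WEIGHTS = {
    "demo_request": 30,        # bottom-funnel, heaviest weight
    "intent_surge": 20,        # third-party intent feed
    "pricing_page_visit": 15,
    "webinar_registration": 5,
    "blog_visit": 2,           # top-funnel, lightest weight
}

HALF_LIFE_DAYS = 45   # within the thirty-to-sixty-day band from the text
MQA_THRESHOLD = 60    # assumed; in practice calibrated to sales capacity

def decayed_points(signal: str, days_ago: float) -> float:
    """A signal's weight after exponential recency decay."""
    return SIGNAL_WEIGHTS[signal] * 0.5 ** (days_ago / HALF_LIFE_DAYS)

def account_score(events) -> float:
    """Sum decayed weights across every contact's events at the account."""
    return sum(decayed_points(signal, age) for signal, age in events)

# Combined activity of three contacts at one account: (signal, days_ago)
events = [
    ("pricing_page_visit", 2),
    ("pricing_page_visit", 5),
    ("demo_request", 10),
    ("intent_surge", 1),
    ("blog_visit", 90),   # two half-lives old, contributes only 0.5 points
]

score = account_score(events)
print(round(score, 1), score >= MQA_THRESHOLD)
```

The decay term is what keeps stale activity from propping up the score: a ninety-day-old blog visit contributes a quarter of its original weight, while last week's pricing-page visits carry nearly full weight.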
Step 5: define the routing
The MQA routes to a named rep based on territory, account ownership, or vertical specialization. The routing rules and the SLA (response time, first-touch within X hours) live in the routing layer.
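Step 5 can be sketched as a territory lookup with an SLA stamp. The territory names, rep identifiers, and 24-hour SLA below are invented for illustration; in practice this logic usually lives in the CRM or sales engagement platform's routing layer.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical routing layer: map a new MQA to a named rep by territory,
# fall back to a shared queue, and stamp a first-touch SLA deadline.
TERRITORY_OWNERS = {
    "na-east": "rep_alice",
    "na-west": "rep_bob",
    "emea": "rep_carol",
}
DEFAULT_QUEUE = "unassigned_mqa_queue"
FIRST_TOUCH_SLA_HOURS = 24  # assumed SLA window

def route_mqa(account: dict) -> dict:
    """Return a routing decision for a newly qualified account."""
    owner = TERRITORY_OWNERS.get(account.get("territory"), DEFAULT_QUEUE)
    due = datetime.now(timezone.utc) + timedelta(hours=FIRST_TOUCH_SLA_HOURS)
    return {"account_id": account["id"], "owner": owner, "first_touch_due": due}

decision = route_mqa({"id": "acct-42", "territory": "emea"})
print(decision["owner"])  # rep_carol
```

The fallback queue matters: an MQA with no territory owner should land somewhere visible rather than disappear, or the SLA is unenforceable.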
Skip the manual work
Abmatic AI runs targets, sequences, ads, meetings, and attribution autonomously. One platform replaces 9 tools.
See the demo →
Common pitfalls in MQA programs
Three patterns recur.

The first is an over-engineered first model: the team tries to ship a fifty-signal MQA model out of the gate; the model is uninspectable, the sales team does not trust the score, and the program stalls. The fix is to start with ten or fewer signals and add only when the data justifies.

The second is a stale threshold: the team sets the MQA threshold once and never recalibrates against sales capacity or pipeline conversion; over time, MQA volume drifts, sales is either flooded or starved, and trust erodes. The fix is to recalibrate the threshold quarterly.

The third is the MQA-MQL conflict: the team runs both models in parallel and produces conflicting routing instructions ("this contact is an MQL but the account is not yet an MQA, what do we do?"). The fix is to define one canonical routing rule and document the precedence.
Who should care about MQA
Three buyer profiles see the strongest fit: ABM teams running named-account motions, where engagement is collective by design; enterprise sales-led organizations, where the buying committee includes five to fifteen stakeholders and one-MQL-per-deal under-counts the real signal; and sales teams that routinely report stalling against committee dynamics the existing MQL model fails to capture.
For broader context, see buying committee and account-based marketing.
How MQA fits with the broader stack
The MQA score lives in the customer data platform or the data warehouse, sources signal from the CRM, the marketing automation platform, the web analytics layer, and the third-party intent feed, and writes the resulting MQA stage back to the CRM as an account-level field. The sales engagement platform reads the MQA stage and surfaces the account in the rep's worklist. See customer data platform (CDP) and account graph for the underlying components.
Book a 30-minute Abmatic AI demo to see MQA scoring fused with first-party engagement, third-party intent, and committee-level routing.
FAQ
Does MQA replace MQL?
For most B2B teams, MQA augments rather than replaces MQL. MQL still has a role for self-serve and PLG motions where one buyer can complete a purchase. MQA is the additional account-level layer that captures committee signal. Many teams run both, with MQA taking precedence for ABM and named-account motions and MQL covering the rest.
How many MQAs should we expect per quarter?
It depends on TAM, target-account list size, and MQA threshold. According to recurring practitioner discussion in r/RevOps and on LinkedIn, mid-market ABM teams typically aim for thirty to a hundred MQAs per rep per quarter, with the threshold tuned to land in that range against the team's pipeline targets.
What signal is most predictive of MQA conversion to opportunity?
Multi-contact engagement at the account, especially when paired with third-party intent surge and fit confirmation. The combination of two-or-more known contacts engaging plus surge on a relevant topic plus ICP match tends to be the strongest single predictor in most models.
How do you measure an MQA program?
The headline metrics are MQA-to-opportunity conversion rate, opportunity-to-closed-won conversion rate, MQA volume per rep, and time-from-MQA-to-first-meeting. These metrics replace MQL-to-opportunity in account-led motions.
Can MQA work without an ABM platform?
Yes, but it requires significant manual stitching. ABM platforms (Demandbase, 6sense, Abmatic AI) automate the score-and-route layer; without one, the team has to build the model in the warehouse, push it to the CRM, and stitch routing rules together manually. The DIY approach works at small scale; the platform investment pays back as account volume grows.
The verdict
A marketing-qualified account is the account-level qualification stage that captures collective committee signal: multi-contact engagement, intent surge, anonymous traffic resolved to the account, fit confirmation, and trigger events, all combined into one score that crosses a defined threshold. MQA is the modern complement (or replacement) for MQL in ABM, named-account, and committee-led B2B motions because the buying decision is collective and one-MQL-per-deal under-counts real signal. Done well, MQA produces a higher-conviction handoff than MQL and shorter time-to-first-meeting. Done poorly (over-engineered, stale threshold, MQA-MQL conflict), it adds complexity without the conversion lift. The 2026 maturity move is a simple model, recalibrated quarterly, with one canonical routing rule.
For broader context, see lead scoring and how to set up account scoring. To see MQA fully operationalized, book a 30-minute Abmatic AI demo.
