Sunday, February 9, 2014

Building Lead Scoring Models in Marketo and Salesforce.com

The theory, practice, and realities of lead scoring


In this blog post, I'll share my experience building and using lead scoring models in Marketo and Salesforce.com: what worked, what didn't, and some insights on organizational challenges that can affect lead scoring.  Ultimately, one of the primary goals of lead scoring is to get sales and marketing aligned on lead quality.  Lead scoring helps marketing focus on lead quality over lead volume, and it helps sales prioritize their follow-up on leads.  Getting sales and marketing on the same page with a lead scoring model helps eliminate the classic argument that occurs when marketing says "look at all the leads that we delivered" and sales says "all the leads are junk."  However, building that alignment and developing a lead scoring model that really works is challenging.


First, let's start with some background.  Years ago I was at a start-up, before lead scoring and lead nurturing were common marketing topics.  I did some analysis on the demographics and behavior of leads that converted into won opportunities.  The behavior data consisted of Salesforce.com campaign data and website page visits.  What I found was that certain sets of demographic values and behavior activity correlated highly with won opportunities.  This suggested to me that if I could acquire certain types of leads and then drive a series of marketing campaigns to engage them with the optimum behavior profile, those leads would have a higher probability of converting into won opportunities.  The behavior activity was a model for what a lead needed to do to understand the value of our solution.  Seeing this set of demographic and behavior data associated with leads that turned into won opportunities was a powerful learning experience.


Fast forward to today: I'm going to assume that most people reading this blog are familiar with lead scoring.  At a high level, you're assigning scores to attributes related to individual leads (demographics), their company (firmographics), their behavior (e.g. campaign engagement, pages visited), and BANT attributes.  BANT covers any questions that capture information about Budget, Authority, Needs, or Timing.  Examples of BANT questions might be: Do you have a budget?  What is your role in evaluating or purchasing these solutions?  What is your most critical challenge?  Each BANT question would have a picklist of potential responses.


How do you get started building lead scoring models?


Building lead scoring models is a joint exercise with the sales and marketing teams.  You go through a discussion where you define the attributes of an ideal lead using insights and data from both the sales and marketing teams.  You’ll probably come up with a big list and you’ll need to prioritize the questions to define which ones make it into the model.


We built separate demographic, behavior, and BANT lead scoring models instead of one overall combined model, because we wanted the sales reps to be able to see exactly what was driving a high lead score.


Next we defined a point scale range for each lead score model.  The Demographic and BANT lead score models had a maximum value of 100 points.  The Behavior lead score model was uncapped.  I’ll explain later in more detail why we created an uncapped Behavior lead score model.


Then we weighted the importance of each question in each lead score model.  For example, if we had 5 BANT questions on the registration form, a simple approach would be to assign each question a maximum value of 20 points.  However, you can also use uneven weighting: with 5 questions, 2 of the questions could be worth a maximum of 35 points each and the remaining 3 questions could be worth 10 points each (so the maximum possible score is still 100 points).  Finally, we discussed each model to decide what would be a meaningful threshold score for a lead to be considered a Marketing Qualified Lead (MQL).  You validate your initial models and determine the MQL threshold by discussing how existing known leads would score against the models.  In our case, we defined score levels for the demographic and BANT lead score models: A, B, C, and D score ranges across the 100 point scale, where A/B were considered good scores and C/D were considered bad scores.
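To make the score levels concrete, here's a minimal sketch of a Salesforce.com formula field that translates a 100-point score into an A/B/C/D score level.  The field name and threshold values here are hypothetical; your cutoffs should come out of the validation discussion above.

IF(BANT_Score__c >= 75, "A",
IF(BANT_Score__c >= 50, "B",
IF(BANT_Score__c >= 25, "C", "D")))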


For the behavior score model, we left it uncapped.  As long as a lead kept engaging with campaigns, the lead’s behavior score would continue to increase.  We wanted the behavior score to be able to reflect the history of a lead’s activities.  We went through the same process of determining the threshold at which the behavior score indicated a marketing qualified lead.  We were planning to build an aging factor so any extended periods of inactivity would cause the behavior score to degrade over time.  We also wanted to be able to measure the rate of change of the behavior score.  A lead that engaged in 3 campaigns in 2 weeks might be more valuable than a lead that engaged in 3 campaigns over 3 months.
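We never finished the aging factor, but a minimal sketch of one in Marketo would be a recurring batch Smart Campaign that subtracts points after a period of inactivity.  The filters, time window, and point value below are hypothetical, not something we shipped:

Smart List:
Filter – Not Visited Web Page, Date of Activity: in past 90 days
Filter – Not Filled Out Form, Date of Activity: in past 90 days
Flow: Change Score
Score Name: Behavior Score
Change: -10
Schedule: Recurring, monthly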


Finally, we developed two different buyer personas or segments: a technical buyer and an economic buyer.  All leads fell into one of these two segments, and each segment had its own point scale for the behavior score model.  For example, a technical buyer downloading a strategic whitepaper earned a low behavior score, but an economic buyer downloading the same whitepaper earned a high one.  For the BANT and demographic lead score models, everyone was scored using the same point system, although you could potentially apply the segmentation to those models as well.  There are some system considerations in implementing the buyer segmentation that I'll discuss in more detail later.


Building Lead Scoring Models in Marketo and Salesforce.com


Here are some detailed insights from building the actual lead scoring models in Marketo and Salesforce.com.  We found that certain types of models or formulas worked best in Marketo and others worked best in Salesforce.com.  In general, Marketo is good at adding to a lead score when a condition is met, whereas Salesforce.com is better at setting a specific lead score value based on the response to a question.  We decided it would be better to handle picklist questions with formulas and workflow/field updates in Salesforce.com.


Build demographic and BANT lead score models in Salesforce.com


Let's start by looking at how we built lead scoring for a BANT question.  Suppose the question is "What is your budget?" and the picklist of potential answers is:
a) I don't have a budget = 0 points
b) Project is approved, but no budget yet = 15 points
c) Project and budget are approved = 25 points


In Marketo, you would build a Smart Campaign that triggers on form submits and data value changes across a wide range of fields.  The Flow logic would be: if data value changes and the new value is "project and budget are approved", then +25, with similar flow logic for the other picklist values.  The weakness of this flow step for lead scoring is that it is additive: if the lead submitted the form a second time and changed the answer, Marketo would add more points to the lead score total instead of replacing the old value.  What we wanted was a formula that would set a specific value based on the picklist value.
We chose to implement the scoring for that type of picklist question using formula fields and workflows with field updates in Salesforce.com.  The Salesforce.com approach to building that lead score calculation is to create a formula with CASE or IF statements around the picklist values.


Another limitation of the Data Value Changes trigger in Marketo is that it only fires on existing leads; when a lead is created for the first time in Marketo, Data Value Changes does not trigger.  In Salesforce.com, I prefer to use formula fields over workflow field updates whenever possible: a formula is evaluated whenever a record is read, while a workflow only fires when a record is created or edited.  Consequently, I prefer to build scoring models for picklist questions with formula fields in Salesforce.com.


In the example of the BANT lead score model, we had five separate BANT questions, each with a set of answers built as a picklist on the field.  Each BANT question had a maximum score, and the combined maximum value of all the BANT questions totaled 100 points.  Using the same example above, where BANT question 1 is worth a maximum of 25 points, we created a formula field for the BANT lead score using CASE statements in Salesforce.com:


CASE(BANT1_field_name__c, "I don't have a budget", 0, "project is approved but no budget yet", 15, "project and budget are approved", 25, 0) +
CASE(BANT2_field_name__c, ..., 0) +
CASE(BANT3_field_name__c, ..., 0) +
CASE(BANT4_field_name__c, ..., 0) +
CASE(BANT5_field_name__c, ..., 0)


A few quick tips.  Note that CASE() requires a final default value (the trailing 0 in each statement above) so that blank or unexpected responses still score zero.  Also, Salesforce.com formulas can be picky when special characters are included in picklist values.  For example, we found that if a picklist value includes a dash, it's important not to have any leading or trailing spaces around the dash for a formula to work: the picklist value "100 –  500" would fail in a formula, whereas the picklist value "100-500" works.  In addition, we ran into problems when we wrote the formulas in Word and then pasted the text into Salesforce.com, most likely because Word converts straight quotes into curly "smart" quotes that the formula editor rejects.  We had to type the formulas directly into Salesforce.com to get them to work properly.


Build the behavior lead score model in Marketo


Marketo was great for handling the behavior score model because the model was uncapped (i.e. it was not constrained to a maximum of 100 points).  We simply created smart list criteria with flow steps to add points to the behavior score field.  We built separate Behavior Score models for lead gen campaigns (e.g. webinars, whitepapers), inbound contact (e.g. submits a contact us form), free trial registration and activation, and page visits to key pages.  Then each of the individual behavior score models was combined into a total behavior score.  This would enable the sales reps to see what kind of activity was driving the total behavior score value.
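As an illustration of how the sub-scores can roll up, here's a sketch that assumes each Marketo score field is mapped to its own Salesforce.com number field (the field names are hypothetical); the total behavior score is then a simple Salesforce.com formula field that sums the parts:

Behavior_Score_Campaign__c +
Behavior_Score_Inbound__c +
Behavior_Score_Trial__c +
Behavior_Score_Web__c

Alternatively, each Marketo scoring flow can include a second Change Score step that adds the same points to a total behavior score field.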
Instead of having to create smart lists for every single campaign, we standardized the naming conventions of our Salesforce.com campaigns and used keywords in those campaign naming standards as triggers in a Marketo Smart List.  For example:


Smart List:
Trigger – Status is Changed in SFDC Campaign
Campaign: starts with <keyword>
New Status is: <member status>
Flow: Change Score
Score Name: Behavior Score Campaign
Change: +25
 

In addition, we standardized all behavior scores as being high, medium, or low point values.  Then we created tokens for those point values and used tokens in the Marketo Change Score flow steps.
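For example (the token names here are hypothetical), you can define My Tokens at the program or folder level and reference them in the Change Score flow step, so a point value only needs to be updated in one place:

My Tokens:
{{my.Score-High}} = 25
{{my.Score-Medium}} = 10
{{my.Score-Low}} = 5

Flow: Change Score
Score Name: Behavior Score Campaign
Change: +{{my.Score-High}}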

As described earlier, we built separate personas or segments for the different buyer roles: a technical buyer and an economic buyer, each with its own point scale for the behavior scoring model.  A technical buyer who downloaded a technical whitepaper would get a high behavior score, while an economic buyer who downloaded the same technical whitepaper would get a low one.  We worked with Sales to define the behavior scores for each persona.  To implement the segments, we created smart lists in Marketo that assigned each lead to a segment based on keywords in the lead's job title.  Then we added the Role Segment into the flow steps of the behavior score models.
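Here's a sketch of what a segmented scoring step can look like using choice logic on the Change Score flow step, in this case for a technical whitepaper download (the point tokens are the hypothetical ones above):

Flow: Change Score
Choice 1: If Role Segment is Technical Buyer, Change: +{{my.Score-High}}
Choice 2: If Role Segment is Economic Buyer, Change: +{{my.Score-Low}}
Default: Change: +{{my.Score-Low}}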


Perhaps the most painful part of building the behavior lead scoring models was the effort to back-score all the existing lead behavior.  We had to create Smart Campaigns to score each individual campaign and run the existing Campaign Members through the flow step once to score the existing leads.
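The back-scoring pattern was a one-time batch Smart Campaign per historical campaign, roughly like this sketch (the campaign name, member status, and point value are hypothetical):

Smart List:
Filter – Member of SFDC Campaign: 2013-Webinar-Product-Launch, Status: Attended
Flow: Change Score
Score Name: Behavior Score Campaign
Change: +25
Schedule: Run once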

 



Additional Considerations and Lessons Learned


Data quality is important

 
Your lead scoring models will only be effective if you have clean, consistent data.  That means you need to establish standards for data values and naming conventions, and you may also have to clean your existing data.  For example, our demographic lead scoring model placed a higher score on Country = United States.  All of our registration forms standardized on "United States" as the value in the country picklist, and we cleaned all of the data in Salesforce.com and Marketo so that variants like USA, United States of America, and US of A were converted to United States.  As part of this standardization process, we came up with a consistent naming methodology for our Salesforce.com campaigns and campaign member status values.
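As a sketch of that kind of cleanup rule, a batch Smart Campaign in Marketo can normalize the country variants with the standard Change Data Value flow step (the exact list of variants is whatever shows up in your data):

Smart List:
Filter – Country is "USA", "US", "United States of America", "US of A"
Flow: Change Data Value
Attribute: Country
New Value: United States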


The lead scoring models must be easy to use for sales reps


Sales reps don't want to lose precious time trying to interpret a complex lead scoring model.  That's where Score Level fields that translate a lead score into a simple A/B/C/D scale make it easy for a sales rep to see which leads to focus on.  However, the details of the lead scoring models are still useful to marketing for understanding what's happening with your leads.  So we built fairly complex lead scoring models for marketing analysis, but added fields that hide that complexity for sales usage.


Registration conversion is impacted by the number of questions on your forms


What happens when you put lots of questions on registration forms?  The expected result is lower conversion rates (higher landing page abandonment), and that's what we saw.  Marketing was asked to create a process that required leads to answer 14 questions to register for any content.  Of those, 8 were the essential contact information that we needed to create a lead and assign it to a business development rep; the remaining 6 consisted of 1 firmographic question (# of employees) and 5 BANT questions.  Compared to our previous registration process, which required only the 8 essential fields, we saw a 25% reduction in conversion rate across all of our landing pages.

 
An alternative might have been to employ progressive profiling to capture the data.  Progressive profiling lets you display different questions each time a lead submits a registration form.  In our original model, we used progressive profiling that captured the complete set of data after two registration form submissions.  However, in this case, Sales insisted that marketing require all 14 questions, because their goal was to get to the highest scoring leads as quickly as possible, regardless of the impact on conversion rates.  The rationale was that the highest quality leads would be so interested in our solution that they would be willing to answer 14 questions to get our content.  There was also an assumption that people would answer the BANT questions truthfully.


In addition, we created a poor user experience by making all 14 questions required on all of our registration forms: a person who wanted to download two pieces of content would have to answer all 14 questions both times.  The good news is that if the Marketo tracking cookie was in place, the form would pre-populate with the responses they had submitted previously.


Should you include BANT questions on your registration forms?  


There's research showing that BANT data captured on registration forms has very low accuracy.  Talk to analysts who specialize in marketing and sales process research, and they'll tell you that BANT data is best captured in phone conversations with leads (tele-qualification).  Our experience bore this out: after several months of capturing data, there was no correlation between the BANT score and lead quality.  In fact, we saw the opposite trend: the higher quality leads that moved down the funnel tended to have low BANT scores.  So either the BANT score model was wrong, or people tend to lie when filling out forms, or both.


In our case, the Sales team felt that BANT data was the most critical piece of lead scoring for prioritizing their efforts, despite the low correlation with lead quality.  So we made a decision to require answers to all of our BANT questions on all registration forms.  That is a significant decision when you consider the impact that requiring 14 questions had on our conversion rates.  Faced with a similar decision, you'll need to decide for yourself whether capturing low-accuracy data on registration forms is worth the tradeoff in lost conversion.

Over the course of 12 months, we made 3 revisions to the BANT lead score model, and none of them showed strong correlation with actual lead quality.  The revisions were driven by subjective input from the sales team rather than analysis of actual customer data.  The reality of lead scoring in a start-up is that you don't have the bandwidth to spend lots of time analyzing data; you have to make your best possible estimate of how to improve processes and then execute.


It’s important to get executive buy-in on the lead scoring strategy


Ultimately, it's critical to get executive buy-in on the lead scoring strategy.  Despite what the data and industry best practices may show, if you don't get buy-in for a given strategy, your lead scoring models can evolve in ways that prove to be ineffective.  In our case, the Sales team's emphasis on the BANT lead score model to prioritize their efforts became the primary focus of our lead scoring work.


Final Thoughts


We built some pretty cool lead scoring functionality, but the jury is still out on whether it was effective.  At the end of the day, a lead scoring model only works if the sales team uses it.
 

Developing a lead scoring model is an important process to help get sales and marketing aligned with a common view of how the lead funnel is performing.  Lead scoring should be a process of continuous improvement and it should be a team process with sales and marketing working together to improve the model.  Start by building your best estimate at a lead scoring model, then capture data and continue to refine the model each quarter.

I’d also like to thank Flora Felisberto for her help in designing, building and testing these lead scoring models.
