Want to blow up your SaaS business? Ignore C4.

Story Highlights:

  • There are MANY SaaS metrics to consider these days. Today we will review "C4"—which includes CLV, CAC, CRC, and Churn—and explain why they matter to a SaaS subscription revenue model.

  • Any one of the C4 elements can blow up a business.

  • Therefore, we must have a deep understanding of C4, how each element is calculated, and what we can do to manage and improve them.

Want to confuse your board along with all your employees? Start by showing them the below table of 59 different SaaS metrics.

These days, SaaS metrics are abundant if not overgrown. Similar to the investment industry, you can go down the metric rabbit-hole pretty quickly before you realize, "Wait a minute, what are we actually trying to accomplish here?"

But there are four key metrics that rule them all. You guessed it: C4. What is C4? 

This explosive composite combines four vital SaaS diagnostics:

  1. CLV: Customer Lifetime Value

  2. CAC: Customer Acquisition Cost

  3. CRC: Customer Retention Cost

  4. Churn

Churn—which typically garners most of the limelight—is the most cancerous, yet easiest to calculate. Churn is typically expressed as either a dollar figure or a percentage of revenue over a certain time period.

For example, let's say a SaaS startup has $1MM in monthly recurring revenue (MRR). Last month, a customer paying $10k/mo churned. Therefore, churn could be expressed as:

  • $10,000 MRR

  • Gross churn = 1%

Simple enough, right?
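The churn math above can be sketched in a few lines of Python, using the example's figures:

```python
def gross_churn_rate(churned_mrr: float, total_mrr: float) -> float:
    """Churned MRR as a fraction of MRR over the period."""
    return churned_mrr / total_mrr

mrr = 1_000_000   # $1MM monthly recurring revenue
churned = 10_000  # the $10k/mo customer who left last month

print(f"Churned MRR: ${churned:,}")                          # $10,000
print(f"Gross churn: {gross_churn_rate(churned, mrr):.0%}")  # 1%
```

The same function works over any period; just keep the numerator and denominator on the same time window.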

Churn is insidious, even maddening at times. But once properly understood and effectively managed, churn can materially improve how you run your business. The key is to treat every churned customer as an archeologist might approach a dig. The good stuff is down below, and you'll have to institute a process to have these conversations, diagnose the root cause, and ultimately arrive at a LEARNING that will improve how you do business.

For example, you will likely lose a customer for product reasons. Perhaps you lacked the feature, functionality or performance they seek. This information MUST reach the ears of the Product and Engineering teams so that they can prioritize such items in their release planning.

Is there anything worse than a churned customer? Yes: when you fail to learn something as a business. That, in other words, is the greatest disservice of all.

 

 

Say it with me: "Trust and Commit"

The awkwardness was palpable. As the meeting droned on, the evidence began to mount: no agenda, lack of focus, petty side arguments, no meeting owner at the helm. . . not even a god-damn notetaker to capture the stream of insanity. It was going from bad to worse: meeting hell.

Sound familiar?

Welcome to my weekly leadership meeting circa Q3 2015. Yeah, it was that bad, maybe worse.

How is this relevant? In a word: trust, the lack of which manifests itself in many ways much like the meeting I described.

Let's take a step back: great leaders have the ability to run effective meetings and channel energy towards healthy debate and ultimately group decisions. Great leaders are able to stitch together a tapestry of diverse perspectives at the seam of their similarities (or differences) and MOVE FORWARD with decisiveness and clarity. But what could get in the way?

Lack of trust.

And here's the catch: most meetings suck, and most leaders are too frazzled from the last meeting/offsite/marathon to really deliver their best 100% of the time.

Allow me to hit the pause button for a dose of positivity: Big things are accomplished only through the perfection of minor details. Why this quote? Because little improvements to say, meeting structure, can go a LONG way. For example, basic meeting etiquette suggests an agenda with time allocated to each topic, a meeting owner/organizer, and a notetaker. If decisions are to be made based on information, a meeting pre-read should be included. This is basic stuff and helps establish TRUST across meeting attendees.

But trust is much deeper and greater than just meeting etiquette. It's about people. Relationships. And what you need to do to earn/build/foster trust. As Stephen M. R. Covey shares with us in The Speed of Trust: "We judge ourselves by our intentions, and others by their behavior. Leadership is getting results in a way that inspires trust."

There are specific behaviors that these leaders embrace:

Remember the scene in Inception where they have to actively increase their consciousness of the dream state in order to offset the skepticism the dream world is imparting? A good leader will call into consciousness the fact that the team lacks trust and get them to focus on it. Bring it front and center and talk about it.

Then what?

I've since heard three powerful words that inspired me to write this article: "trust and commit". Say it with me: "trust and commit." One more time, "trust and commit." I can't HEAR YOU !!??

Okay, that's enough.

The point is, these three words reshaped our team's ability to collaborate, discuss, decide and execute. "Trust and commit" became our mantra and a pivotal threshold at which the group would decide to "trust and commit" or not. If trust and commit was achieved, that means the decision was fully-baked and all stakeholders had signed off. More importantly, all stakeholders would be ACCOUNTABLE to the "trust and commit"(ment) they had made.

By simply having a decision-making mantra, our team was able to review and discuss initiatives quicker, make decisions faster, and execute more consistently.

Okay, one more time: "TRUST and COMMIT!!"

[Editor note: Thank you to our readers for all your calls and emails since this post was published.]



Superforecasting to the rescue (again)

Welcome back to forecaster training, inspired by Part V of Edge.org's Master Class in Superforecasting. The unique skill of superforecasting resonates deeply with DBT Ventures due, in part, to the immense impact across the four key components of the DBT endeavor: ideas, data science, customer success, and leadership.

This segment draws heavily from Danny's contingent valuation experiments which, if you haven't perused before, are a hygienic read (1,832 academic citations agree).

The contingent valuation experiments reveal the similarity between three superficially very different things:

  1. Subjects' judgements of value, i.e. scope sensitivity

  2. Likelihood of an event happening between 2 different time periods

  3. Scenario bias

For example, what is more probable: the first scenario, or the second?

 

. . . while continuing to manifest a vexing problem: people's judgement of explanations and forecasting accuracy are vulnerable to rich narratives, i.e. attribute substitution. 

We can also fall prey to assigning too much probability to too many possibilities, which violates the axioms of probability to begin with.

Yet scenarios CAN be useful when thinking backward in time. The relationship between counterfactuals and hindsight bias (which we discussed previously) is powerful.

Getting people to imagine counterfactual alternatives to reality is a way of counteracting hindsight bias. Hindsight bias is a difficulty people have remembering past states of ignorance. Counterfactual scenarios can reconnect us to our past states of ignorance. And that can be a useful, humbling exercise. It's good mental hygiene. It's useful for de-biasing.

"One learns from Shakespeare that self-overhearing is the prime function of soliloquy. Hamlet teaches us how to talk to oneself, and not how to talk to others." -Harold Bloom

Get people to listen to themselves think about how they think, i.e. build the capacity to listen to yourself talk to yourself and decide if you like what you hear. A fleeting achievement of consciousness, to be sure, but relevant to superforecasting nonetheless.

So how can superforecasting improve the world? Well, we could use forecasting skills to improve the quality of high-stakes policy debate. Today's political discourse is NOT motivated by pure accuracy goals. Quite the opposite. And political pundits have a myriad of habits/tactics/issues which actively remove accuracy from the conversation:

  • Ego defense

  • Self-promotion

  • Loyalty to a community of co-believers

  • Rhetorical obfuscation

  • Attribute substitution (big one)

  • Functionalist blurring, and—one of the most pervasive—

  • Super (qualified) forecasting

So what should we do? Introduce a superforecasting tournament in order to disrupt "stale status hierarchies" and invite pundits to compete. Boom. Politics solved.

 

A patent flyby

Today we will take a 30,000-ft flyby tour of the wonderful world of patents. First, let's start by clearly defining what a patent actually is: A patent is a set of exclusive rights granted by a sovereign state to an inventor or assignee for a limited period of time in exchange for detailed public disclosure of an invention. An invention is a solution to a specific technological problem and is a product or a process. Patents are a form of intellectual property.

Good, glad we got that out of the way.

Once your patent application is approved, you get a patent. It shows up in the mail and looks like this (I know because I have a few):

What's important to know ahead of time is that the government is a bureaucratic entity and grossly inefficient. The patent application process is cumbersome, redundant, slow, and expensive. It's good to embrace this expectation now so you don't turn back mid-flight.

It's not the government's fault entirely—volume surely plays a part too. For example, in 2014 there were 615,243 patent applications. That's a lot of applications to sift through (about 1,685 per day) each with the requisite formwork. Interested? Here is where you can find all the forms needed to file your very own patent application. 

Not sure if you want to go all in? You have a quicker, less-expensive option: behold, the provisional patent. Have you ever seen "patent pending" on a doohickey? You guessed it: the doohickey's inventor used a provisional patent to temporarily protect his or her idea to 1) lock in the filing date, and 2) allow more time to file a non-provisional patent in parallel.

Provisional patents are, as far as patent law goes, interesting. It's important to know their history:

Since June 8, 1995, the United States Patent and Trademark Office (USPTO) has offered inventors the option of filing a provisional application for patent which was designed to provide a lower-cost first patent filing in the United States and to give U.S. applicants parity with foreign applicants under the GATT Uruguay Round Agreements [you don't need to know this].

A provisional application for patent (provisional application) is a U.S. national application filed in the USPTO under 35 U.S.C. §111(b). A provisional application is not required to have a formal patent claim or an oath or declaration. Provisional applications also should not include any information disclosure (prior art) statement since provisional applications are not examined. A provisional application provides the means to establish an early effective filing date in a later filed nonprovisional patent application filed under 35 U.S.C. §111(a). It also allows the term "Patent Pending" to be applied in connection with the description of the invention.

See the part about establishing an early effective filing date? That's the part you should know. In short, a provisional patent is a cost-effective way to start the patent clock ticking and give yourself 12 months to file a non-provisional patent (if you decide to). Your invention could, after all, suck. And who wants to go through a lengthy patent application process for a sucky invention? No one.

To be complete, a provisional application must also include the filing fee as set forth in 37 CFR 1.16(d) and a cover sheet* identifying:

  • the application as a provisional application for patent;

  • the name(s) of all inventors;

  • inventor residence(s);

  • title of the invention;

  • name and registration number of attorney or agent and docket number (if applicable);

  • correspondence address; and

  • any U.S. Government agency that has a property interest in the application.

A cover sheet, form PTO/SB/16, pages 1 and 2, is available at www.uspto.gov/forms/index.jsp.

Enough boring forms already.

Allow me to share the story of the most valuable patent in history: 

Lipitor, a cholesterol-lowering drug used to help reduce heart attack and stroke risk, represents the most valuable patent in history. It actually expired on June 28, 2011. We'll get to that later.


Pfizer filed a patent application for Lipitor on 2/26/91, which issued on 12/28/93. The product was launched in the market in 1997, with revenues peaking at $12.6 billion in 2006. By the end of 2009, total revenue was greater than $105 billion. Yes, you read that correctly. $105 billion. It became the most profitable patent ever produced, making it more valuable than most companies in the S&P 500. However, when it expired in 2011, the patent became worthless.

Based on this information, why would a company use patents?

Patents provide protection in a variety of ways. They give the owner the exclusive right to exclude others from practicing the invention in the market. They protect something functional or utilitarian (e.g., a new engine design or drug compound). They allow for the abnormal market profits inherent in the monopolistic nature of a patent, and patent owners can price skim if the patent's utility presents a strong value proposition. Furthermore, patents can command treble damages for willful infringement.

While advantages exist with patents, several disadvantages must also be considered:

  • Patents are expensive. One patent can cost anywhere from $10,000 to $50,000. An international patent can cost upwards of $250,000!

  • Patents have short useful lives, with the typical statutory life of 20 years or less.

  • Patents require full disclosure, revealing specific design information to competitors.

  • Patents lose value every day on a present value basis.

  • Lastly, patents are expensive to defend. A typical patent lawsuit in the United States costs $3 million or more.

The team at DBT Ventures hopes you found this patent flyby helpful and/or interesting. Comments welcome!

Why NPS is crucial to scaling Customer Success

Simply put, building a Net Promoter System (NPS) will give your business a sustainable competitive advantage. This insightful management tool can be used to gauge the loyalty of your customer relationships. It serves as a modern alternative to traditional customer satisfaction research.

To calculate your NPS score, you begin by asking your customers a simple question: How likely are you to recommend [Your Company] to a friend or colleague? (Scale 0-10). You then bucket your responses based on their score:

  • Promoters: 9-10

  • Passives: 7-8

  • Detractors: 0-6

Next, take the % of promoters (# promoters / total responses) and subtract the % of detractors (# detractors / total responses). The best NPS is 100%. The worst NPS is -100%.
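That calculation can be sketched in Python; the survey responses below are hypothetical:

```python
def nps(scores):
    """Net Promoter Score from 0-10 responses:
    promoters are 9-10, detractors are 0-6, passives (7-8) are ignored."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]  # made-up survey data
print(nps(responses))  # 30.0 (50% promoters minus 20% detractors)
```

Note that passives still count in the denominator, which is why converting a 7 into a 9 moves the score twice as hard as you might expect.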

But why does this matter? A few reasons:

  1. Retention: detractors are 2-3 times more likely to churn. Therefore, you should treat detractors like ticking churn grenades.

  2. Growth: promoters account for 80-90% of positive word of mouth. Therefore, you should treat promoters like referral machines which buoy your company's reputation in the marketplace.

  3. Feedback: this is arguably the most important, and the most actionable. Detractors will provide you with feedback on WHY they don't recommend your product, service or company. You will then have a decision: take action, or don't take action. This feedback loop is critical in the evolution and improvement of your functional teams.

  4. Alignment: the NPS framework is a great way to align EVERYONE in your company—from the intern to the CEO—around a common dialect that reinforces a "customer first" mentality.

On a tactical level, DBT recommends the following vendors to build out NPS:

  • Get Feedback: survey tool (integrates with Salesforce; sample survey)

  • Salesforce: CRM

  • Marketo: automated distribution of NPS email with link

  • Qualaroo: use for obtaining in-product NPS scores

But what does a good NPS look like? Well, it depends on your industry. Since DBT primarily works with high-growth technology companies, we can share that most of our clients are targeting an NPS in the 50-65 range.

NPS by industry

Some companies, particularly in the mobile space—like Uber—use a modified NPS system. A 0-10 scale doesn't display nicely within the limited real estate of a smartphone, so a lot of companies use a five-point scale and require a reason if you rate 1-3. For example, Uber's mandatory NPS feedback loop looks like this:

Lastly, here are several of the companies considered (by some) to be NPS leaders in their space. B2C companies tend to have significantly higher NPS than B2B companies.

If Chuck Berry was a CSM

[Chuck Berry's birthday was yesterday. The legendary rock and roll guitarist turned 89. Happy birthday Chuck!]

Assuming your Customer Success Manager team has avoided the pitfall of being subsumed by technical support (there should unequivocally be a separate function for this), you typically have three schools of thought on how to comp CSMs:

  1. Revenue (broadly)

  2. Product usage & adoption

  3. Discretionary

Revenue: this is arguably the best way because 1) revenue matters, 2) CSMs can directly impact it and achieve upside, and 3) it creates a sense of ownership over their accounts.

Product usage & adoption: this is the second best option. Your customers won't derive value from your software/product without usage. Therefore usage is key to obtaining ROI, earning organizational adoption, and securing renewals and growth.

Discretionary: this option is easy in the sense it requires little overhead for the manager; however, it is highly subjective and doesn't clearly align CSMs with any tangible business impact. Therefore, it is the worst option (although it is quite common).

Side note: In terms of base/variable split, most CSMs we've encountered are on an 80/20 split, e.g. $80k base, $20k variable, $100k OTE. The $20k variable is often paid quarterly ($5k per quarter). Salaries range from $75k (entry level) to $180k (very senior).

Let's unpack the revenue method: CSMs typically own a "portfolio" of accounts. CSMs work with customers to understand their goals and connect those goals with the capabilities of your product; success here means increased usage, increased value, and ultimately retained or expanded revenue.

Since Chuck Berry's birthday was yesterday, let's say a hypothetical CSM—named Chuck—starts Q4 with 80 accounts paying $5k MRR for a total of $400k MRR or $4.8MM ARR. In each month customers can churn, expand, or renew with no change (flat renewal). 

You can therefore calculate a net retention metric for Chuck:

Chuck Berry's hypothetical Q4 performance.

The formula for monthly net retention = 1 + ((Expansion + Churn) / MRR managed), where churn is a negative number. So in Oct-2015, we'd have: 1 + (($3,000 + (-$4,000)) / $400,000), or 99.75%. Sum it all up, take the average, and Chuck delivered an average monthly net retention of 100.04%, or 100.48% annualized (1.0004^12).
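Here is the same arithmetic as a Python sketch, with October's figures from the example:

```python
def monthly_net_retention(mrr_managed, expansion, churn):
    """Net retention for one month; churn is passed as a negative number."""
    return 1 + (expansion + churn) / mrr_managed

# Oct-2015: $3k expansion, $4k churn, $400k MRR managed
oct_nr = monthly_net_retention(400_000, 3_000, -4_000)
print(f"{oct_nr:.2%}")  # 99.75%

# Annualize an average monthly figure by compounding over 12 months
print(f"{1.0004 ** 12:.2%}")  # ~100.48%
```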

Now the question becomes: Is that good or bad based on your business model?

Generally speaking, net retention below 100% is bad. That means you have a churn problem and customers are net leaving you. That is a separate conversation. Best-in-class net retention is 101-102% monthly, or 112.7%-126.8% annualized. Companies that DBT is advising are targeting roughly 101.5% monthly net retention, which compounds to the nice round number of 120% annualized.

A fair CSM comp model might look something like this: if Chuck achieves between 99-100% net retention, he gets his OTE quarterly bonus of $5k. BUT, if Chuck gets 100%, or 101% or 102% he can hit accelerators and earn $7k, $10k, or $15k respectively.

Some might gawk at paying a CSM $15k in a single quarter, but think about what Chuck has done for your business: he has net expanded his portfolio by 2% each month (102%) which is $8k MRR per month or $24k MRR for the quarter. $24k MRR is $288k in additional annual revenue for your business! $15k represents 5.2% of the gain, a modest price to pay for enviable net retention metrics.
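The accelerator model above can be expressed as a small lookup. The thresholds and payouts are the hypothetical figures from this example, not a standard plan:

```python
def quarterly_bonus(net_retention: float) -> int:
    """Quarterly CSM payout by average monthly net retention (hypothetical tiers)."""
    if net_retention >= 1.02:
        return 15_000
    if net_retention >= 1.01:
        return 10_000
    if net_retention >= 1.00:
        return 7_000
    if net_retention >= 0.99:
        return 5_000
    return 0

print(quarterly_bonus(1.0004))  # Chuck's Q4 average of 100.04% earns $7,000
```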

Think about THAT next time you're listening to Johnny B. Goode.

Go Johnny go.

ROC your world

The importance of statisticians in SaaS

If you're going to explore data science strategies for your SaaS business, you'd be well-served to learn about "ROC curves".

Why?

Because ROC curves assess the quality of data science output. Think of ROC curves as a report card. They help you visualize the quality of the data science deliverable on your desk.

For example, let's say your data science team (or consultants) builds a model to help your sales team identify which prospects are most likely to buy. We'll call it a "Propensity To Buy" score. And since businesses love lingo, we'll call it a "PTB" score. Acronyms, FTW.

 

Two models walk into a startup

To take a quick step back: data science models typically fall into two camps: 1) regression: trying to predict a continuous outcome or variable, or 2) classification: trying to predict a binary outcome. Our fictitious PTB score is therefore a . . . you guessed it, a "classification" model. Nicely done. Now we're getting somewhere.

But how do you objectively assess the quality of something very smart people produced by ingesting dozens if not hundreds of variables and training sets? The ROC curve. Boom.

We can thank WWII radar engineers for the lengthy name: Receiver Operating Characteristic. But their intent was much simpler: they needed a way to know how much of the good stuff their model captured (true positive rate/TPR) vs. the amount of bad stuff their model also captured (false positive rate/FPR).

For example:

  • TPR: Radar imaging model captures a Nazi battalion of Panzer IV tanks = nice work

  • FPR: Radar imaging model captures a herd of very large French cows = needs work

Same goes for business: how many of your prospects are being correctly classified (TPR) vs. incorrectly classified (FPR)? Here's what ROC curves look like in the wild:
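As a rough sketch of what's behind an ROC curve: sweep a decision threshold across the model's scores and record the TPR/FPR pair at each step. The PTB scores and buy/no-buy labels below are made up for illustration:

```python
def roc_points(scores, labels):
    """Trace (FPR, TPR) points by sweeping a threshold over the scores.
    scores: model outputs, higher = more likely to buy; labels: 1 bought, 0 didn't."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for threshold in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]  # hypothetical PTB scores
labels = [1,   1,   0,   1,   0,   0]    # who actually bought
for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

A curve hugging the top-left corner means the model captures true positives before it starts accumulating false positives, i.e. Panzers before cows.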

We'll get into this topic much deeper in future posts, but for now we just wanted to make sure the DBT readership is aware of this crucial tool for assessing data science output.


Effort without consistency is like interest without compounding

There is a battle underway, and most of us are losing. This isn't a battle overseas with platoons strategizing their next move. This is a battle of the mind. A battle for mindshare. To the victor goes our cognitive focus.


We fight this battle daily: hundreds of emails, native ads, social media intrusions—all of which are enabled by our insatiable need to check our smartphones (the latest research suggests we do this at least 150 times per day). For the mathematicians in the house, that's once every five to six minutes over 14 waking hours (at a minimum).

What we sacrifice is focus. Focus is becoming a scarce resource for today's knowledge worker and leader. And when we sacrifice focus we dilute our EFFORT, and therefore, results.

Effort without consistency is like interest without compounding.

This troublesome dynamic is supported by a mountain of literature, e.g. Harvard Business Review's The Cost of Continuously Checking Email.

But there's no value in denying this reality, so we must adapt. Therefore, we'd like to offer our readers a few thoughts on how to navigate this battlefield, particularly when it comes to goals.

In my view, there are two types of goals:

  1. Binary: you either accomplished the goal, or you didn't; it’s a singular, one-time deal.

  2. Recurring: an activity you seek to repeat by (hopefully) forming systems/habits.

For example, a binary goal might be: I will summit Mt. Everest by July 4th, 2017. You are either going to summit Mt. Everest by July 4th (and triumphantly stake an American Flag), or you will fail to summit Mt. Everest. There is a singular moment of accomplishment or attainment. Most executives track their quarterly goals on a goal sheet and cross them off upon completion.

Recurring goals are repeating by nature: you must accomplish the goal routinely, over time. For example: I will practice Transcendental Meditation twice a day for 15 minutes, 5 days per week. A recurring goal is designed to form a habit—a very powerful human ability. We define success as having accomplished 80% of the activities you set out to do, e.g. a goal of meditating 5 days per week—or 60 times per quarter—would be deemed “completed” if you meditated 48 times (80% x 60).
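The 80% rule can be sketched in a couple of lines, using the meditation figures from the example:

```python
def goal_completed(done: int, target: int, threshold: float = 0.80) -> bool:
    """A recurring goal counts as completed at >= 80% of planned repetitions."""
    return done / target >= threshold

# 5 days/week of meditation = 60 sessions per quarter; 48 clears the 80% bar
print(goal_completed(48, 60))  # True
print(goal_completed(40, 60))  # False
```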

Today's digital battlefield of distraction makes recurring goals extremely challenging. To win, we need to extract our recurring goals from our goal sheet into a separate system.

For your consideration, we offer you: The DBT Recurring Goal Sheet. 

As the saying goes, If you can't measure it, you can't manage it. How else can you really track a dozen or so recurring goals with any truthfulness? The above framework provides a simple process for tallying your progress for all goals that aren't a singular, binary event.

To get a copy of the DBT Recurring Goals Sheet along with our DBT Goal Sheet (Binary + Recurring on 1 page) within our Leadership Library, please navigate to the Contact page and fill out the form so we can email it to you (we promise not to spam).

We hope you find these two pieces of artillery helpful in the battle for mindshare to accomplish your professional goals.

Do you have what it takes to be a superforecaster?

Three top traits of superforecasters include:

  1. They tolerate dissonance

  2. They practice "counterfactualizing"

  3. They embrace (unabashedly) rampant hypothesis generation

Want to learn more?

Jump on over to Edge.org to witness a fantastic synthesis of genius minds challenging each other's thinking, but more so unpacking their questions.

Edge Master Class 2015: A Short Course in Superforecasting

About Edge.org: To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.


Remembering 9/11

Today at DBT we remember the 2,977 victims of the ruthless 9/11 terrorist attacks in 2001. May their lives not be forgotten, along with the American liberties we hold dear: life, liberty and the pursuit of happiness.

For it is these liberties that afford Americans the freedom to work hard in order to improve the lives of their families, their fellow Americans, and themselves. 

5 Traits of Best-in-Class Optimization Teams

What are your best customers doing?

That is the #1 question I hear from customers on a day-to-day basis. How do other companies do optimization and testing? It’s a great question.

Based on thousands of interactions with Optimizely customers and four years of enterprise enablement, I can confidently point to five traits that all best-in-class optimization teams possess:

  1. They’ve established a habit of optimization.

  2. There is a clear “owner” of the optimization program.

  3. The C-suite cares about optimization (and acts on it).

  4. Optimization goals are aligned with key company metrics.

  5. They make it fun.

1. They’ve established a habit of optimization.

 

“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”

—Aristotle

Today, Aristotle’s adage above still rings true. It also highlights a cornerstone of all successful optimization programs: HABIT.

In Charles Duhigg’s The Power of Habit we learn that habits are a three-step loop: cue, routine, reward. The cue is what triggers the routine. Thankfully, when Google launched Google Calendar in April of 2006 humans obtained an easy way to design their own cues. Enter: the “repeating” meeting for the win.

Sounds trivial, but all of our best customers embrace some form of the recurring meeting format. It is the forcing function that furthers their optimization endeavor.

Do you have a repeating meeting on your calendar to create your company’s optimization habit?

A few other examples of habit-forming meetings:

  • Weekly optimization standup (Forbes)—technical review of pre-launch experiments

  • Weekly results review (HomeAway.com)—identify learnings from completed tests

  • Quarterly KPI evaluation (Crate & Barrel)—goal alignment, deliverables for the quarter

  • Weekly prioritization meeting (TicketMaster)—stack rank based on effort vs. impact quadrants

2. There is a clear “owner” of the optimization program.

When it comes to execution, a world-class optimization program relies on people. Humans who work to design, manage, and ultimately execute against a plan.

Whether your team is an army-of-one or 50+ people, the linchpin is most certainly the program manager, e.g. the optimization “owner”.

Ask yourself: Who wakes up in the morning and thinks about optimization at my company? If there isn’t an owner, assign one or hire one. Otherwise your optimization program will likely flatline.

Here is what this role typically looks like on LinkedIn:

This critical role takes the time to:

  • Crowd-source testing ideas from the org

  • Consolidate them in a testing backlog

  • Prioritize the backlog based on KPIs and effort vs. impact

  • Communicate with—and get buy-in from—stakeholders

  • Green light tests for execution in a centralized project plan (see below)

  • Track and communicate results and inferred learnings

  • Iterate. Use what was learned to inform the go-forward strategy.
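One step in that list, prioritizing the backlog by effort vs. impact, can be sketched as a simple stack rank. The backlog items and scores below are hypothetical:

```python
# Stack-rank a testing backlog by estimated impact per unit of effort (1-10 scales)
backlog = [
    {"idea": "New checkout CTA copy",  "impact": 8, "effort": 2},
    {"idea": "Homepage hero redesign", "impact": 9, "effort": 8},
    {"idea": "Shorter signup form",    "impact": 6, "effort": 3},
]

ranked = sorted(backlog, key=lambda t: t["impact"] / t["effort"], reverse=True)
for test in ranked:
    print(f'{test["idea"]}: score {test["impact"] / test["effort"]:.1f}')
```

Quick wins (high impact, low effort) float to the top; big-bet redesigns sink down the list until the easy gains are exhausted.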

This is a lot of work for someone who isn’t 100% committed. For this reason, they can’t be a part-time lover (yes, that’s a Stevie Wonder reference on an optimization blog).

If you don’t have the resources internally, it’s not the end of the world. Look to evaluate Solutions Partners who can help steer the ship for you.

Sidenote: Best-in-class programs also have substantial access to developer/IT resources. If you don’t have this benefit, it might be time to make some new friends in that group. Arming yourself with Red Bull, quirky dev humor, and knowledge of the new Civ will earn you major points. Developer support of your optimization program will add substantial octane to the engine. Rev it up!

3. The C-Suite cares about optimization (and acts on it).

If your leadership team cares about A/B testing and optimization, you’re in a good place. But talk is cheap, so we look for clues that they actually walk the talk. Does your leadership team:

  • Allocate strategy & technical resources to optimization?

  • Review results regularly?

  • Suggest ideas for testing?

  • Say, “I don’t know, let’s test it” or “We should test that”?

  • Provide guidance and direction on quarterly optimization goals?

  • Prevent certain stakeholders from blocking the deployment of winning tests?

Without executive sponsorship, building a best-in-class optimization program can be a scratch & claw uphill battle. The Roadmap to Building a Testing Culture eBook contains a number of ideas to get their buy-in.

4. Optimization goals are aligned with key company metrics.

In our 6 Best Practices article we highlight “defining quantifiable success metrics” as the #1 driver of success. But the industry leaders take it a step further: their testing goals are not only well-defined, but also aligned with their key company metrics. For example, a retail website like The Honest Company would align their goals as such:

This alignment helps them deprioritize less-relevant tests by keeping their eye on the prize (i.e., improving Customer Lifetime Value) and ensures the testing program doesn’t go off the rails into random-behavior land.

(Disclaimer: I don’t agree with the premise of this cartoon at all, but I do think it’s hilarious.)

5. They make it fun.

The best-in-class companies make optimization fun.

Three weeks ago I attended the Zappos Culture Camp: a 3-day deep dive into their special sauce that’s fueled their ridiculous growth to $1B+ in revenue and compelled Amazon to acquire them in 2009 for 40x EBITDA. It’s also worth mentioning that Zappos AOV is ~$130 vs. Amazon’s ~$50. Boom goes the dynamite.

Impressive numbers aside, Zappos is unique for another reason: they’ve built a company culture that intentionally values fun, e.g. Zappos Family Core Value #3: Create Fun and a Little Weirdness.

Here are a few ways we’ve seen optimization made fun:

  • Submit an idea competition! (IGN)

  • Test of the week/month (HomeAway.com)

  • Quarterly A/B Headline Hackathon (CNN)

  • Company-wide recognition for the person that suggested a winning experiment (A&E)

  • Host a quarterly off-site and invite optimization experts to speak (Mozilla)

I hope you’ve enjoyed reading this! If something resonated with you—or I completely missed something—please post a comment below.

Don’t forget the technical side…

The 5 traits above are mostly organizational & non-technical in nature. I’d be remiss to not mention the technical side of the optimization yin-yang. Here are the 3 Technical Best Practices we recommend based on best-in-class optimization programs:

1. They make their product and visitor data available client-side.

  • This is a game-changer.

  • This could be in the form of cookies, Javascript variables, custom tags, etc.

  • This is really important because you—as a technical person—have now enabled non-technical folks to leverage the data that’s available.
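As a minimal sketch of what this looks like in practice (the object shape, variable name, and cookie name here are hypothetical, not any vendor's actual API), the data can be exposed as a global "data layer" variable or parsed out of a cookie the server already sets:

```typescript
// Hypothetical example: expose product/visitor data client-side so that
// non-technical teammates can target experiments and personalization on it.
// All names (visitorData, visitor_segment) are illustrative.

// Option 1: a global data-layer object, rendered into the page by the server.
interface VisitorData {
  plan: string;         // e.g. "enterprise"
  isReturning: boolean; // derived server-side
  cartValue: number;    // current cart total, in dollars
}
// In a real page: window.visitorData = { ... };
const visitorData: VisitorData = {
  plan: "enterprise",
  isReturning: true,
  cartValue: 129.99,
};

// Option 2: parse a value the server set in a cookie. Pure function over the
// "k=v; k2=v2" format of document.cookie so it's easy to test.
function readCookie(cookieString: string, name: string): string | undefined {
  const entry = cookieString
    .split("; ")
    .find((c) => c.startsWith(name + "="));
  return entry ? decodeURIComponent(entry.slice(name.length + 1)) : undefined;
}

// In the browser you'd pass document.cookie:
const segment = readCookie("uid=42; visitor_segment=high_value", "visitor_segment");
console.log(segment); // "high_value"
```

Either way, the point is the same: once the data is reachable from the page, a marketer can use it in a targeting condition without filing a dev ticket.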

2. They really understand all the nooks & crannies of their site.

  • What cookies does your company already use? What’s in those cookies?

  • How do you leverage the cookie data for optimization and personalization?

  • Do you know your website(s)’ page hierarchy and URL structure?

  • Do you have a strong grasp of your site’s moving parts: dynamic content, AJAX, etc.?

3. They ensure that existing processes don’t stand in the way of development velocity.

  • Streamlined QA and development process due to the reduction of red tape

  • Is it really necessary to create a fully functional design and requirements doc for a CTA or image change?

  • Don’t let perfect be the enemy of good.

Thoughts on organizing & measuring a CSM team

Reposted from an interview with Aircall.io on August 8, 2015.

Optimizely is a San Francisco-based startup and the leader in A/B testing and experience optimization. I recently sat down with Luke Diaz, a manager on the Customer Success team—and founder of DBT Ventures—to share his experience and advice for organizing and measuring a customer success team.

Luke currently leads a 15-person Customer Success Manager (CSM) team in charge of over 80% of Optimizely’s revenue. The CSM team manages launch (onboarding), success management (adoption & value), renewals (retention) and expansion (account growth; in tandem with the Sales team).

The CSM team is one team within the larger 70-person Customer Success team at Optimizely which includes Technical Support, Strategy, Solutions Architects, and Education.

From my (personal) standpoint, Optimizely is very advanced on the topic. Yet every startup—whatever their stage of development or price point—can learn from Luke’s very actionable tips:

  • Measure the value customers extract from your product

  • Start customer success with the sales team

  • Transform your customer’s organization to achieve success

In order to effectively lead the CSM team, Optimizely focuses on 3 metrics:

  1. Customer value derived from the product

  2. Customer satisfaction

  3. Revenue generation

Luke explains below how these objectives are translated into processes and culture.

Measure the value customers extract from your product

Optimizely tracks the activity of each Enterprise customer: usage logs, number of A/B tests run, etc. Seems obvious – all serious SaaS businesses (should) do that. To do so, Optimizely uses a blend of Totango (a Customer Success Intelligence software) and proprietary regression models built by their data science team, e.g. churn score, upsell score, account potential score, etc.

What’s more, they follow the number of successful experiments (in their case, delivering a clear A/B winner) and, as much as they can, actually compute the dollar value generated by successful experiments.

As Luke says:

“When I plan a customer business review, I hope to have a very clear, factual view of the $, or millions of $, we’ve helped them generate using Optimizely. If it doesn’t make dollars, it doesn’t make sense.”

ROI is most explicit when displayed in revenue for, say, a Retail or E-commerce site. But this information is not always easy to gather, especially when dealing with SaaS or media businesses. How do you measure the actual value of improving lead conversion on an optimized sign-up form, or additional clicks on a media module?

Optimizely’s team is considering adding such functionality and reporting to their product, or integrating with technologies like Moat (ad viewability) to convey ROI better.

Beyond this #1 metric, Optimizely measures customer satisfaction (metric #2) using NPS (Net Promoter Score) surveys. They run regular NPS surveys at the brand level (4x a year), at the end of the onboarding phase (8-10 weeks), and after each interaction with the support team. The NPS results give an idea of service quality and perceived value, and serve as a proxy for customer loyalty.
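For readers unfamiliar with the mechanics, NPS itself is a simple calculation: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch (not Optimizely's actual tooling, just the standard formula):

```typescript
// NPS = % promoters (9-10) minus % detractors (0-6); passives (7-8) count
// toward the total but neither bucket. Result ranges from -100 to +100.
function netPromoterScore(responses: number[]): number {
  const n = responses.length;
  if (n === 0) throw new Error("no survey responses");
  const promoters = responses.filter((s) => s >= 9).length;
  const detractors = responses.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / n) * 100);
}

// 10 responses: 5 promoters, 3 passives, 2 detractors -> (5 - 2) / 10 = +30
console.log(netPromoterScore([10, 10, 9, 9, 9, 8, 8, 7, 6, 4])); // 30
```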

Finally, the #3 metric is revenue. Luke and his team are incentivized on the net retention rate of their portfolio. They’ve actually managed to achieve a negative net churn rate (reminder: net churn = gross churn – expansion + contraction), around (0.5%), in recent months. One simple metric used for almost the entire CSM team.
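To make the net churn arithmetic concrete, here is a sketch using hypothetical figures chosen to land near the roughly (0.5%) negative net churn mentioned above:

```typescript
// Net churn = gross churn - expansion + contraction, as a % of starting MRR.
// A negative result means expansion revenue more than offset all revenue lost.
// All dollar figures below are hypothetical, for illustration only.
function netChurnRate(
  startingMrr: number,
  churnedMrr: number,      // gross churn (lost customers)
  contractionMrr: number,  // downgrades from retained customers
  expansionMrr: number     // upsells/upgrades from retained customers
): number {
  return ((churnedMrr - expansionMrr + contractionMrr) / startingMrr) * 100;
}

// A month on $1MM MRR: $15k churned, $5k contraction, $25k expansion
console.log(netChurnRate(1_000_000, 15_000, 5_000, 25_000)); // -0.5, i.e. -0.5%
```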

Luke is currently experimenting with two additional functions on the CSM team:

  • Launch Manager: a new role dedicated exclusively to the enterprise onboarding process (measured by NPS and volume)

  • Mid-Market CSM: higher volume, lower touch account management approach (measured by renewal rates & net retention)

Start customer success with the sales team

As Luke shared with me:

“I feel lucky to work with the Sales team we have at Optimizely. They are some of the most empathetic and intelligent folks I’ve ever worked with, and they put the customer’s needs and goals first and foremost to ensure a proper fit. Sales and Customer Success have crafted a strong partnership which is imperative to achieve best-in-class net retention.”

Beyond the performance of the Customer Success team, the complementary process that ensures a negative churn rate is a sales validation process implemented by Optimizely’s VP of Sales, Travis Bryant: whenever a sales rep identifies a new account, he or she is required to populate a 45-question validation form before closing the deal. Simple, straightforward, objective Yes/No questions designed to determine whether the new prospect is actually a good fit for Optimizely. (Steli Efti, from Close.io, shares a similar philosophy, although he uses a different method.)

According to Luke, the validation form by itself isn’t what guarantees the quality of new customers, but it sets the baseline for having a culture of customer success and retention inside the Sales team. It implies a shared agreement between the Sales and Customer Success teams that each customer is a good fit for Optimizely.

In addition, Luke personally screens every new customer and places special importance on validating their needs. According to Luke, with this process, truly “bad” enterprise customers are extremely rare.

It’s common practice to build retention metrics into salespeople’s incentives so they don’t chase customers without a longer-term view. Optimizely’s approach is interesting in that it clearly incentivizes salespeople on revenue generation, but ensures coordination with the Success team—which owns net retention—through a validation process and tight collaboration.

In the spirit of transparency, the entire Optimizely organization receives an email alert whenever an enterprise customer decides to churn.

“This simple workflow ensures that all employees—from the intern to the C-level—are in the loop when we fall short for a customer, and it often sparks internal dialogue about priorities and opportunities to change, iterate and refine.”

Transform your customer’s organization to achieve success

Believe it or not, a major part of the customer’s ultimate success does not rely on your customer success team, your sales team, or your product, but rather on your customer’s organization. We came to this conclusion as Luke was sharing his best and worst customer success experiences.

Worst? The biggest challenge is when the customer lacks the skills or people to execute: generate ideas, set up experiments, QA, measure, rinse & repeat. According to Luke, the main reason for “customer failure” with Optimizely is a disconnect between a buying decision made by a senior executive and the actual resources available on the team to use Optimizely’s software and get value out of it. To help close this gap, Optimizely has curated a network of 80 Solutions Partners to help their customers build or accelerate their optimization program.

Another challenge: earning executive mindshare at the VP and C-suite level. In a recent survey of 500 CMOs, optimization ranked #12 out of 17 various marketing priorities.

“We are crafting our sales and account management strategy to uplevel the conversation and earn executive sponsorship. This strategy, along with the coming product releases (e.g. personalization) will ultimately make Optimizely unturnoffable.” 

Best? Luke’s most memorable customer success happened when one of the Customer Success team members convinced a client to make a hire to lead and improve optimization initiatives, after demonstrating that the first tests generated a 15% increase in revenue. Optimizely provided the data to help the customer’s Marketing Director make the case for net new headcount, thereby creating transformational change in the company. “In my opinion, it was one of our proudest moments,” Luke said.

Worst scenario, best scenario: both are related to the customers’ resource allocation, and that’s one key lesson for every SaaS company out there: the ROI your customers derive from your software is correlated with the resources devoted to it. Convince your customers to organize for success!

http://blog.aircall.io/customer-success-optimizely/