A/B Testing Reads: What happens when you reverse-test?

1 reply
Hi, a question for the group. Say we roll out a new product feature (a new filter in an app) and see a dip in conversion on a pre-post basis. Two months later we decide to actually test the feature, and the results show that the original variant (the one from before the rollout) now performs worse than the new feature. How often have you run into this in your own testing? To sum up: how often have you observed users getting used to a new feature (even a suboptimal one), and how do you quantify that effect?
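To make "quantify the effect" concrete, here is a minimal sketch (Python standard library only, made-up counts) of checking whether the gap between the two arms of such a reverse test is bigger than noise. The function and the numbers are illustrative, not data from this thread.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (lift, z, two-sided p-value) comparing conversion of arms A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0: equal rates
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, z, p_value

# Arm A: the original UI, re-exposed in the reverse test; arm B: the new filter.
lift, z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift={lift:+.4f}  z={z:.2f}  p={p:.3f}")

If the original variant loses even after users have had months to habituate, the dip in the initial pre-post comparison was plausibly a novelty/learning effect (or seasonality) rather than the feature itself.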
#a/b #reads #reverse
  • savidge4
    Originally Posted by sueeton (question quoted above)
    It would seem to me that adding a feature should be measured in two ways. First, adding the feature in the first place should be driven by demand: do people actually want it? That leads to #2: are your users actually using the new feature?

    Take Google as an example: they count a "feature" as successful at 80% or greater usage. It's not use at onset that determines success, but use over time. So your testing is not failing per se; it is simply showing usage at launch versus usage three months later.
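    A hedged sketch of what tracking that looks like: a weekly feature-usage rate (share of active users who touched the feature that week), so launch-week numbers can sit next to numbers from months later. The flat event log below is an assumed shape, not a real schema.

    from collections import defaultdict
    from datetime import date

    events = [  # (user_id, day, used_feature) -- illustrative rows
        ("u1", date(2024, 1, 3), True),
        ("u2", date(2024, 1, 4), False),
        ("u1", date(2024, 3, 20), True),
        ("u3", date(2024, 3, 21), True),
    ]

    active, used = defaultdict(set), defaultdict(set)
    for user, day, used_feature in events:
        week = day.isocalendar()[:2]          # (ISO year, ISO week number)
        active[week].add(user)
        if used_feature:
            used[week].add(user)

    for week in sorted(active):
        print(week, f"usage rate = {len(used[week]) / len(active[week]):.0%}")

    Against a bar like the 80% one above, the interesting question is which week, if ever, the rate crosses it.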

    You then need to start looking at the curve of acceptance of the new feature: what can you do to increase its usage? One could easily argue that added features increase overall product value and lead to greater customer lifetime, and isn't that the actual goal?
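    One way to see that curve, assuming you can derive each user's week of first use (a hypothetical per-user field here; None meaning they never tried it):

    first_use_week = {"u1": 0, "u2": 2, "u3": 1, "u4": None, "u5": 3}  # illustrative

    n_users = len(first_use_week)
    max_week = max(w for w in first_use_week.values() if w is not None)

    adopted = 0
    for week in range(max_week + 1):
        adopted += sum(1 for w in first_use_week.values() if w == week)
        print(f"week {week}: {adopted / n_users:.0%} have tried the feature")

    A curve that is still climbing means adoption is underway; one that has flattened well below 100% tells you how big the "never touched it" group really is.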

    I would suggest building in a way to measure feature usage separately from conversion. Knowing the percentage of users who use the feature versus those who don't lets you focus on "advertising" the feature. You may have added the feature and people simply don't know it's there... OR you skipped step #1 and never determined the feature was wanted in the first place.
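    A minimal sketch of that split, assuming you log a per-user used-the-feature flag next to the conversion flag (both hypothetical fields):

    users = [  # (used_feature, converted) -- illustrative flags
        (True, True), (True, False), (True, True),
        (False, False), (False, True), (False, False), (False, False),
    ]

    def rate(rows):
        return sum(converted for _, converted in rows) / len(rows) if rows else 0.0

    used = [u for u in users if u[0]]
    not_used = [u for u in users if not u[0]]
    print(f"feature usage: {len(used) / len(users):.0%} of users")
    print(f"conversion | used feature:    {rate(used):.0%}")
    print(f"conversion | ignored feature: {rate(not_used):.0%}")

    Low usage with decent conversion among users of the feature points at a discoverability problem; high usage with worse conversion points at the feature itself.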

    Hope that helps!
    Signature: Success is an ACT not an idea
