    Sample Ratio Mismatch (SRM) is a common issue in A/B testing that can significantly affect the accuracy of your experiment results. This article explains what SRM is, why it matters, how to detect it, and what to do if it occurs in your experiments.

    What is Sample Ratio Mismatch (SRM)?

    SRM happens when there is a significant difference between the expected distribution of users in your test variants and the actual observed distribution. For example, if you set up a 50/50 traffic split but see 65% of users in Variation A and 35% in Variation B, that may indicate an SRM.

    This unequal distribution will, of course, skew your test results and make them unreliable.

    Why SRM Matters

    An SRM indicates that something may be wrong with your experiment setup or tracking. If left unchecked, it can lead to false conclusions and poor decision-making.

    SRM may be caused by:

    • Incorrect implementation of the experiment script
    • Broken or delayed JavaScript execution
    • Filters or targeting rules that unintentionally block users
    • Traffic issues, like bot traffic or caching problems
    • Manual changes made during the test (e.g., pausing a variation)

    How to Detect SRM

    Omniconvert Explore will do this for you automatically. When the system detects a mismatch, it will display a notification on the experiment.
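
    Under the hood, SRM detection is typically a chi-squared goodness-of-fit test: compare the observed user counts per variation against the counts the configured split predicts, and flag the experiment when the resulting p-value is very small. The sketch below illustrates the technique for a two-variation test; it is not Omniconvert's implementation, and the `srm_check` name, the `alpha = 0.001` threshold, and the sample counts are assumptions made for the example.

```python
import math

def srm_check(observed, expected_ratios, alpha=0.001):
    """Chi-squared goodness-of-fit check for sample ratio mismatch.

    observed        -- list of user counts per variation, e.g. [6500, 3500]
    expected_ratios -- expected traffic shares, e.g. [0.5, 0.5]
    alpha           -- significance threshold; SRM checks commonly use a
                       strict value such as 0.001 to avoid false alarms
    Returns (chi_squared, p_value, srm_detected).
    The closed-form p-value below is valid for two variations
    (one degree of freedom).
    """
    total = sum(observed)
    expected = [total * r for r in expected_ratios]
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # Survival function of the chi-squared distribution with df = 1
    p_value = math.erfc(math.sqrt(chi_sq / 2))
    return chi_sq, p_value, p_value < alpha

# 50/50 split, but 6,500 vs 3,500 users: clear mismatch
_, _, srm = srm_check([6500, 3500], [0.5, 0.5])
print(srm)  # True

# 50/50 split, 5,040 vs 4,960 users: ordinary random noise
_, _, srm = srm_check([5040, 4960], [0.5, 0.5])
print(srm)  # False
```

    The same check works for unequal splits (for example `expected_ratios=[0.1, 0.9]`): what matters is whether the observed counts deviate from whatever split was configured, not whether the split itself is even.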

    Potential causes for SRM

    Several situations can cause SRM, including:

    1. Changing traffic allocation mid-experiment

    Changing the traffic allocation between variations while the experiment is running changes the expected split mid-flight, so the accumulated counts no longer match any single expected ratio.

    2. Having unequal traffic splits for the variations

    Running an experiment with a heavily unequal split, such as Control 10% – Variation 90%, might cause the SRM notification to show.

    3. Changing device type mid-experiment

    Changing the devices which are eligible to show the experiment while it is running, for example moving it from ‘All Devices’ to ‘Mobile only’.

    4. Not excluding bots properly

    Although the tracking code is constructed so it does not work for bot traffic, there can still be edge cases where bots scrape only one variation, inflating its session count.
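
    If you collect analytics yourself, a first-pass bot filter usually inspects the user-agent string. The sketch below is illustrative only: the `is_probable_bot` helper and its pattern list are hypothetical, and production systems rely on maintained bot lists and behavioural signals rather than a handful of keywords.

```python
import re

# A few common crawler signatures; real bot lists (e.g. the IAB list)
# are far longer and updated regularly.
BOT_PATTERN = re.compile(r"bot|crawler|spider|slurp|headless", re.IGNORECASE)

def is_probable_bot(user_agent):
    """Return True when the user-agent is missing or matches a bot signature."""
    return user_agent is None or bool(BOT_PATTERN.search(user_agent))

print(is_probable_bot("Mozilla/5.0 (compatible; Googlebot/2.1)"))    # True
print(is_probable_bot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False
```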

    5. Not excluding internal traffic properly

    For example:

    • Team members visit the page repeatedly during reviews or demos.
    • Internal testing tools hit the experiment endpoint hundreds of times.
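
    A common way to exclude internal traffic is to filter visits coming from known office or VPN IP ranges. The sketch below is a minimal illustration, not Omniconvert's mechanism; the `is_internal` helper and the network ranges (which use documentation-reserved addresses) are assumptions for the example and would need to be replaced with your organisation's own ranges.

```python
import ipaddress

# Hypothetical office/VPN ranges using documentation-reserved addresses;
# replace these with your organisation's real ranges.
INTERNAL_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example office range
    ipaddress.ip_network("198.51.100.0/24"),  # example VPN range
]

def is_internal(ip):
    """Return True when the visitor IP falls inside an internal range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETWORKS)

print(is_internal("203.0.113.42"))  # True: exclude from the experiment
print(is_internal("8.8.8.8"))       # False: eligible visitor
```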

    What to do if you notice SRM in one of your experiments

    We have compiled a list of common DOs and DON’Ts that can be helpful when troubleshooting SRM.

    1. Adding or removing device types during a live test

    DON’T

    • Don’t change which devices are eligible for the experiment while the test is running.
    • Don’t assume traffic is still split evenly if the device segment has been altered.

    DO

    • Plan the full list of devices that should show the experiment before launching.
    • Lock the test scope and avoid last-minute edits.

    2. Changing traffic allocation mid-experiment

    DON’T

    • Don’t change the traffic allocation between the variations unless the test is completely stopped.
    • Don’t edit live variations; doing so resets conditions unfairly.
    • If you have an experiment with 3 variations and you see one of them performing better, do not increase the traffic allocated to it while decreasing it for the other 2. Instead, if you decide to test only 1 variation, duplicate the experiment, delete the 2 unwanted variations, and publish the new experiment.

    DO

    • QA and approve all variations before going live.
    • Clone and restart a test if major edits are needed.

    Check experiment traffic logs; spikes or gaps often reveal mid-test pauses.

    Contact our support team

    If you’re not sure whether your test is affected by SRM or need help investigating a possible mismatch, reach out to our support team using the chat feature in your Explore account.
