17 Oct 2024
3 min read

Updating Your MMM Model Weekly Isn’t Worth the Effort

Written by Daniel Lidral-Porter on October 17, 2024

When it comes to Marketing Mix Modeling (MMM), updating your model more frequently than once a month may not be the most effective use of your time and resources. Here's why sticking to a monthly update cadence makes sense.

Minimal Impact of New Data on MMM Results

MMM relies heavily on extensive historical data to accurately capture seasonality and long-term trends. Typically, our models use two years of data to ensure they can robustly measure these effects. A single additional week constitutes less than 1% of a two-year dataset. This minimal addition is unlikely to significantly influence the model's outcomes, because MMM aims to achieve a good fit across the entire dataset. The small shifts in results from weekly updates don't justify the effort required to collect and process new data every week (especially because MMM analyses typically incorporate channels that don't offer an automated way to collect spend or impression data).
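
To make that proportion concrete, here's the back-of-the-envelope arithmetic, assuming weekly-grain data as in a typical MMM dataset:

```python
# How much of the dataset is one new week, given two years of history?
weeks_of_history = 2 * 52          # two years of weekly-grain data
new_weeks = 1
share = new_weeks / (weeks_of_history + new_weeks)
print(f"New data share: {share:.2%}")  # ~0.95% of the total dataset
```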

If the results of an MMM model do change significantly after adding one week of new data, that points to a poorly specified model. It's implausible for a channel's ROI to jump from 2 to 5 after one additional week, because the dataset is still 99% the same. If that does happen, the model is under-specified: it can find too many different explanations of how your marketing drives your KPI. The model should be improved so it consistently finds the most plausible ROIs, which will not shift drastically after one new week of data is added.
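
One way to operationalize this sanity check is to compare ROI estimates across consecutive refreshes and flag large swings. A minimal sketch (the function, channel names, and threshold are hypothetical, not a Rockerbox API):

```python
# Hypothetical sanity check: compare channel ROI estimates from two
# consecutive model refreshes and flag implausibly large swings.
def flag_unstable_rois(prev_rois, new_rois, max_relative_change=0.25):
    """Return channels whose ROI moved more than the tolerance after a
    refresh that added only ~1% new data."""
    unstable = {}
    for channel, prev in prev_rois.items():
        new = new_rois.get(channel)
        if new is None or prev == 0:
            continue
        if abs(new - prev) / abs(prev) > max_relative_change:
            unstable[channel] = (prev, new)
    return unstable

prev = {"paid_search": 2.1, "paid_social": 1.4}
new = {"paid_search": 5.0, "paid_social": 1.5}
print(flag_unstable_rois(prev, new))
# {'paid_search': (2.1, 5.0)} -- a 2 -> 5 jump signals an under-specified model
```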

Time Delay Due to Backtesting

To ensure that the model's predictions can be trusted, it's best practice to backtest the model on new data it hasn't yet seen. This process verifies that the model's predictions can generalize to new data outside of its training set, ensuring it isn't "overfitting" on historical data. To achieve this, we reserve the last three months of data as a holdout set, which we don't allow the model to see during training.

After training, we check how closely the model's predictions for the holdout set match the actual data. If they're close, it indicates that the model has effectively learned how your marketing activity drives your KPI. It's crucial to use the most recent data for this holdout set to verify that the model has learned the current relationship between your marketing efforts and the KPI; using older data wouldn't provide insights into how well the model performs under current market conditions.
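
A minimal sketch of this holdout procedure on weekly data (the series and the stand-in "model" are synthetic, purely to illustrate the split and the scoring):

```python
HOLDOUT_WEEKS = 13  # ~3 months of weekly data reserved for backtesting

# Synthetic two-year KPI series: a repeating yearly seasonal pattern.
weekly_kpi = [100 + (week % 52) for week in range(104)]

train = weekly_kpi[:-HOLDOUT_WEEKS]    # the model trains only on this
holdout = weekly_kpi[-HOLDOUT_WEEKS:]  # reserved; never seen in training

# Stand-in "model": predict each holdout week with the same week a year ago.
predictions = weekly_kpi[-HOLDOUT_WEEKS - 52:-52]

def mape(actual, predicted):
    """Mean absolute percentage error across the holdout weeks."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)

print(f"Holdout MAPE: {mape(holdout, predictions):.1%}")
# 0.0% here because the synthetic series repeats exactly year over year;
# on real data, a low holdout MAPE indicates the model generalizes.
```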

However, this methodology means there's always a multi-month lag before the model can learn anything from new data. It takes three months for today's data to age out of the holdout set, become part of the training set, and be learned from. So even if your model is refreshed weekly, it will still take approximately three months and half a week for the model to incorporate today's data. The difference between refreshing monthly and refreshing weekly, in terms of how long it takes the model to learn from a new data point, is therefore minimal: three and a half months versus three months and half a week. While there's still some benefit to refreshing more often, it's much smaller than it may initially appear, and not worth the extra time and effort it takes most businesses to collect data on a weekly basis.
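
The arithmetic behind those figures, assuming a 13-week holdout and counting the average wait for the next refresh after a data point ages out:

```python
HOLDOUT_WEEKS = 13  # ~3 months held out for backtesting

def weeks_until_learned(refresh_interval_weeks):
    # A new data point sits in the 3-month holdout, then waits on average
    # half a refresh interval before the next refresh trains on it.
    return HOLDOUT_WEEKS + refresh_interval_weeks / 2

print(weeks_until_learned(1))   # weekly refresh:  13.5 weeks (~3 months + half a week)
print(weeks_until_learned(4))   # monthly refresh: 15.0 weeks (~3.5 months)
```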

MMM's Strength in High-Level Strategic Insights

MMM excels at providing a macro-level view of marketing performance, usually at the platform level rather than drilling down into individual campaigns. This is because MMM is a statistical analysis that requires:

  • A significant number of data points for each channel: To ensure statistical validity, each channel needs enough data points to reveal meaningful patterns.
  • Observable effects amid random fluctuations: The impact of each channel on the business KPI being modeled must be substantial enough to stand out from the day-to-day random variations in that KPI. This means that each channel included in MMM analysis must comprise an appreciable portion of overall marketing spend.

These requirements mean breaking down channels into finer segments diminishes the confidence level of the results, making them less actionable. Because MMM requires relatively coarse-grained channels, it’s most effective when used to guide strategic decisions like platform budget allocation, which typically occur on a monthly or quarterly basis. In that context, updating the model more frequently won't offer additional value because such strategic shifts don't happen weekly.
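
A stylized illustration of why finer segmentation hurts, using the standard error of a single-variable least-squares ROI estimate (real MMMs are multivariate, and collinearity between sub-channels that move together makes the picture even worse than this sketch suggests):

```python
import math
import random

random.seed(1)

def roi_standard_error(spend, noise_sd=100.0):
    """Standard error of the ROI (slope) estimate in a simple one-variable
    least-squares fit of KPI = baseline + roi * spend + noise."""
    n = len(spend)
    mean = sum(spend) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in spend) / n)
    return noise_sd / (sd * math.sqrt(n))

# 104 weeks of spend for one coarse channel, then the same channel split
# into four equal campaigns (each getting a quarter of the spend).
channel_spend = [random.uniform(5_000, 15_000) for _ in range(104)]
campaign_spend = [spend / 4 for spend in channel_spend]

print(roi_standard_error(channel_spend))   # tighter ROI estimate
print(roi_standard_error(campaign_spend))  # exactly 4x wider: less actionable
```

Quartering each series' spend quarters its variation, which quadruples the uncertainty on its ROI estimate, which is why coarse, platform-level channels yield more confident, more actionable results.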

Leveraging MTA for Tactical Decisions

For more immediate, tactical decisions—such as assessing individual campaigns or creative elements—it's more effective to use Multi-Touch Attribution (MTA). MTA uses user-level data capturing the marketing touchpoints each customer was exposed to. Using this data to gauge the relative performance (rather than the true incremental performance) of campaigns or creative elements requires fewer data points to reach acceptable confidence levels and can quickly adapt to new data, providing faster feedback than MMM.

In Summary

While it's important to keep your marketing strategies informed by the latest data, updating your MMM model more than once a month offers diminishing returns. The minimal impact of new weekly data on the model's outcomes doesn't justify the extra effort involved. By focusing on monthly updates, you ensure that your strategic decisions are based on robust, meaningful insights while maximizing the return on the time and effort required to collect data. For quicker, tactical insights, complement your MMM with MTA analyses to keep your campaigns agile and effective.
