Finding the best bidding style

I analysed the A/B test data of a hypothetical company to determine the best bidding type for selling a product. I found the data on Kaggle and felt it would be great for practice.

  • You can view the more detailed analysis (with code) here
  • Here’s the GitHub repo

Background

Anonymous.com

A company recently introduced a new bidding type, “average bidding”, as an alternative to its existing bidding type, “maximum bidding”. One of our clients, anonymous.com, decided to try this new feature and wants an A/B test to understand whether average bidding brings more conversions than maximum bidding. The A/B test ran for one month, and anonymous.com now expects an analysis and presentation of its results.

The test displayed the different bidding styles as advertisement campaigns to two groups. Participants in the control group viewed the maximum bidding campaigns, and those in the test group viewed the average bidding campaigns.

Method

I conducted a nonparametric hypothesis test (the Mann-Whitney U test) to determine which bidding type leads to more conversions, measured here as the number of purchases.

I asked:

Which bidding type leads to more purchases?

My hypotheses were:

  • Null hypothesis: There is no difference between the mean purchases of the control group and the test group.
  • Alternative hypothesis: There is a difference between the mean purchases of the control group and the test group.

Alpha = 0.05 (5%).
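
To make the procedure concrete, here is a minimal sketch of the test in Python. The file and column names (`ab_test.csv`, `group`, `purchase`) are assumptions for illustration; the actual notebook linked above uses the Kaggle dataset’s own names.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

ALPHA = 0.05

# Hypothetical file and column names -- the real dataset's names may differ.
df = pd.read_csv("ab_test.csv")
control = df.loc[df["group"] == "control", "purchase"]  # maximum bidding
test = df.loc[df["group"] == "test", "purchase"]         # average bidding

# Two-sided Mann-Whitney U test on the purchases of the two groups
stat, p_value = mannwhitneyu(control, test, alternative="two-sided")
print(f"U statistic = {stat:.2f}, p-value = {p_value:.4f}")

if p_value < ALPHA:
    print("Reject the null hypothesis: the groups differ.")
else:
    print("Fail to reject the null hypothesis: no significant difference.")
```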

Results

The test returned a p-value of 0.82 (82%), well above the alpha of 0.05, so I fail to reject the null hypothesis.

A p-value of 0.82 says that:

  • The observed difference between the average and maximum bidding styles is not statistically significant and could easily be due to chance.
  • If there were truly no difference between the two bidding types, we would still see a difference at least as large as the observed one (-10) about 82% of the time.
  • In other words, the data give essentially no evidence that average bidding led to any real change in purchases.

Limitation

The sample sizes of the two groups were not equal: the control group had 1,066,894 more participants than the test group. This imbalance may have affected the outcome (p-value = 0.82). Ideally, the test should have been run until both groups reached comparable sizes before it was stopped. Since the test has already ended, I am in no position to rectify the error.
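
A quick way to surface this kind of imbalance before testing is to count the observations per group. A short sketch, again assuming the hypothetical file and column names from the example above:

```python
import pandas as pd

# Hypothetical file and column names, as in the earlier sketch.
df = pd.read_csv("ab_test.csv")
group_sizes = df["group"].value_counts()
print(group_sizes)
print("Size difference:", abs(group_sizes["control"] - group_sizes["test"]))
```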

Recommendation

There is no significant difference between average and maximum bidding, so there is no need to adopt the average bidding style. Advertising with average bidding would give companies the same or similar conversions as maximum bidding while increasing campaign expenses.