seedubjay

Life as a Lab Rat

19 March 2021

Who’s to say each visitor to a website needs to be shown the same website?

Instead, every visit can be an opportunity to experiment on users in a process called A/B testing.

Let’s say you’re designing a new website and want to put an advertisement in a spot which will be clicked on most often. Instead of surveying your users or using a focus group, you can run a simple experiment:

  1. Silently split visitors to your website into Group A and Group B
  2. Show each group a different version of the website which has an advertisement in different positions
  3. Relentlessly track each group’s activity to determine which version of the website produces the most profits
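
To make the mechanics concrete, here is a minimal sketch of how the split in step 1 and the variation in step 2 might be implemented. It is an illustration under assumptions, not any particular site’s code: the hashing scheme and function names are invented for the example.

```typescript
// A minimal sketch of an A/B split (illustrative only; names are hypothetical).
// Each visitor gets a stable group so they always see the same variant.

type Group = "A" | "B";

// Hash the visitor's ID (e.g. a cookie value) into a number, then bucket it.
function assignGroup(visitorId: string): Group {
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 2 === 0 ? "A" : "B";
}

// Render the variant for this visitor; a real test would also log every click
// to an analytics server so the groups' click-through rates can be compared.
function runExperiment(visitorId: string): void {
  const group = assignGroup(visitorId);
  const adPosition = group === "A" ? "sidebar" : "top-banner";
  console.log(`Visitor ${visitorId} is in group ${group}; ad shown in ${adPosition}`);
}
```

The important property is that the assignment is deterministic: the same visitor always lands in the same group, so each person consistently sees one version of the site while the two groups’ behaviour is compared in aggregate.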

In its most benign form, A/B testing can be used to optimise simple things like font sizes and the ordering of menu bar items.

But it can also be applied to much more murky concepts: the addictiveness of a social media feed, what tone and wording to use in emails to convince users not to unsubscribe, and how long to set the timer for a made-up ‘Limited time only!’ shop discount.

This type of experimentation runs rampant on almost every large website on the internet. Tech giants like LinkedIn and Netflix run hundreds of experiments per day and can experiment on users according to their language, location, social group and more.

And unlike research at a university, these experiments require no approval or monitoring from an ethics board or regulator. Nor are these companies required to ask for informed consent before experimenting on you.

[Image: a lab rat]
Ironically, experiments on actual lab rats are subject to more regulation than experiments run on the internet

For instance, Facebook conducted an experiment on almost 700,000 users for a week in 2012 by deliberately removing either happy or sad posts from the subjects’ feeds to see how it would influence what they wrote in Facebook posts later on.

Put simply, it was testing whether it could manipulate subjects’ emotions through their feed.

When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

In 2014, the dating site OKCupid published results from its own experiment where it tampered with the compatibility rating (‘match percentage’) that it showed its users for each match:

We asked: does the displayed match percentage cause […] people to actually like each other? As far as we can measure, yes, it does. When we tell people they are a good match, they act as if they are. Even when they should be wrong for each other.

It’s remarkable that these companies were willing to experiment on their users’ emotions in this way.

But what’s more remarkable is that we ever got to hear about the experiments at all. They were disclosed entirely voluntarily as part of a research paper and a blog post, respectively.

Since the very public outcry after these A/B tests were published back in 2014, there’s been little to no mention of the practice by any of the tech giants.

Very few research papers announcing results, very little heated public discourse, and very few scandals.

Perhaps they realised the ethical dilemmas they were facing and stopped their experiments outright?

Or perhaps they noticed that the public gets rather grumpy when you tell them they are being experimented on, so it’s best just to keep it quiet.

Hmm. Which one, which one, which one could it be…

~ Footnote ~

I have a small confession to make… You are actually taking part in an A/B test right now.

(If you reload the page a few times you might spot what I've been testing in the first few paragraphs.)

The experiment operates silently in the background and measures how far each visitor has scrolled through the page in the first 30 seconds.
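
As a rough illustration (an assumption about the mechanics, not this page’s actual code), that kind of scroll-depth measurement can be written in a few lines of browser-side TypeScript:

```typescript
// Record the deepest scroll position reached in the first 30 seconds.
let maxScrollFraction = 0;

function onScroll(): void {
  const scrolled = window.scrollY + window.innerHeight;      // bottom of viewport
  const total = document.documentElement.scrollHeight;       // full page height
  maxScrollFraction = Math.max(maxScrollFraction, scrolled / total);
}

window.addEventListener("scroll", onScroll);

// After 30 seconds, stop listening and keep only the single aggregate number.
setTimeout(() => {
  window.removeEventListener("scroll", onScroll);
  console.log(`Deepest scroll in first 30s: ${Math.round(maxScrollFraction * 100)}%`);
}, 30_000);
```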

Here are the results so far:

[Live results chart, shown on the original page]

It is worth noting though that this A/B test differs quite a lot from a real-world test.

For one thing, it's testing something completely meaningless.

It's also specifically designed so that no personal information ever leaves your device, unlike a real-world test where plenty of tracking and identification data would be sent back to a central database.
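
To illustrate the difference, here is a purely hypothetical example (not this page’s actual implementation) of the kind of anonymous report such a test could send, compared with what a real-world tracker typically attaches:

```typescript
// Hypothetical anonymous report: only the variant and the aggregate metric,
// nothing that identifies the visitor.
const anonymousReport = {
  variant: "A",          // which version of the opening paragraphs was shown
  maxScrollPercent: 64,  // deepest scroll reached in the first 30 seconds
};
console.log(JSON.stringify(anonymousReport));

// By contrast, a real-world tracker would typically bundle identifying fields
// with every event: a persistent user ID, session ID, device details, and so on.
```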

And it's usually recommended you don’t tell your subjects what’s happening mid-experiment… obviously.

