
The Checked Box – How Tech Companies “Guide” You


I hate having to hit the unsubscribe button on marketing emails. Not because I actually want the emails, oh God no. For someone who likes to maintain a clean inbox, the idea of not being spammed with emails I probably didn’t even knowingly sign up for is immensely satisfying.

All I have to do is click the “unsubscribe” button, go to the webpage, uncheck all the boxes, and presto! I’m all done. Until the next email pops up. Sometimes, I get the feeling that the button actually doesn’t do anything. Maybe I should keep track of which platforms I’ve unsubscribed from.

But it points to a bigger problem. I start to think: when did I actually sign up for these? There is usually even an option to tell the website why you’re unsubscribing, and one of the listed reasons is “I never signed up for these emails.”

If there was ever a blatant admission of guilt, this is it.

What companies have done is make it easy to sign up while adding friction if you ever want to unsubscribe. If you’ve ever thought about leaving a gym, you know what I’m talking about. You can sign up online, but if you ever want to stop paying the monthly membership, you have to go to the HQ in person, between 2 and 3 pm, on a Tuesday, and fill out paperwork.

Technology has enabled companies to guide our decisions without us knowing, using a neat little trick. Some call it manipulation in extreme cases, but the general term is choice architecture.

What Is Choice Architecture?

Choice architecture is the design of the environments and systems that guide people toward certain decisions, without introducing any obvious limitation on the options they have. In other words, choice architecture seeks to get people to do something without explicitly telling them to do it.

A classic example: a store trying to encourage people to eat more healthily places its healthier offerings at eye level on the shelves, while the unhealthier options sit out of reach or require more effort to get to.

Individual choices may seem arbitrary, but it’s the sequence of choices that determines how our personalities and lives develop. Sometimes we have too many options to choose from. A good choice architect limits the options presented to their constituents to those believed to bring the maximal marginal benefit. Even though we love having more options, how many times have we gotten stuck picking a movie to watch or a restaurant to eat at?

What Choice Architecture Isn’t

Here, we’re not talking about incentives and punishments. Those are great for building habits, but they are not examples of good choice architecture. We are not trying to take away people’s autonomy. People should always have the ability to change their choice without any significant economic or personal loss.

Similarly, censorship is not choice architecture. Censorship removes the choice itself from the menu of options, as opposed to changing the context around the choice.

It sounds an awful lot like manipulation

Yes, it can be interpreted that way, but think of how we currently make choices, and how we currently define ourselves. Our behavior wasn’t formed out of thin air. It is the result of both random and intentional interactions with the environment.

So, are we really okay with an understanding that our personalities are just a representation of a pattern formed by chaos?

If used correctly, choice architecture can have a phenomenally beneficial impact on society. A recent study found that the phrasing of text messages sent to patients about upcoming vaccines had a significant impact on the vaccination rate.

Now imagine how big of an impact an altruistic government could make. It’s not difficult to come up with policies to improve people’s lives. It’s difficult to enact them.

The Love-Hate Relationship with Tech

The effects of choice architecture are most salient when people are unaware of its elements. Given that everyone has their own biases, if people are told that they are being guided toward certain decisions, their reception depends entirely on whether their ideology aligns with that of the overseer.

It is for that reason that technology can play such a crucial role in choice architecture. The scale at which companies like Google and Facebook operate, as well as how entrenched they are in our lives, has made it easier than ever for us to hand over our data.

You don’t have to venture far to get a taste of this. Go to a website where you have to create an account. You’ll most likely be presented with three options: create an account with your email address, or use Google or Facebook authentication. Even if you try to sign up with your Gmail address directly, you’ll be prompted to use Google instead.

Your choices are there. You HAVE the option to not tie the account to data harvesters. But it’s extremely convenient to do so: a single click versus typing out your full email address. Once you do, you’re done. Congratulations! Facebook now has one more aspect of your life it can look into to serve you ads.

I tried turning off Google Location Sharing

I consciously made the effort to try to reduce the data footprint I was giving to Google. Let’s set aside the fact that, at the time, I was using a Google Pixel phone.

I didn’t even last one day.

First, you get the warning that you’ll have limited functionality. All those apps that ask to use your location immediately stopped working, obviously. If that were all, it wouldn’t be such a big deal.

But to use those apps, I had to go through several steps to toggle the settings back and forth. The UX designers at Google are really good. Frighteningly so. The objective of a smartphone is to add convenience, and here I was, spending 30 seconds to use an app for 5 seconds.

After a while, it became too much. What if I needed to use an app for an emergency? Would I be okay with this friction? I wasn’t even okay with it when I was just doom-scrolling through social media.

My choices weren’t limited, but the context in which I made them decided how I would react.

As technology gets more complex and we get overloaded with the tools we want to use, we have decisions to make. Do we read the terms of agreement every time? Do we explore all the default configurations and their implications? It’s practically impossible to say yes, given how much inconvenience that presents.

How many of us will actually go and uncheck the default choices our tech benefactors have graciously selected?
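The “checked box” pattern isn’t limited to web forms; it shows up anywhere a system picks a default on your behalf. As a purely illustrative sketch (the function and field names here are hypothetical, not any real signup API), the same dynamic can be expressed as a default parameter: users who take no action are silently opted in, and opting out requires an explicit extra step.

```python
# Hypothetical signup function illustrating the "checked box" default.
# The newsletter flag defaults to True, so anyone who never touches the
# option is opted in -- the code equivalent of a pre-checked box.

def create_account(email: str, newsletter: bool = True) -> dict:
    """Create an account record; omitting `newsletter` accepts the default."""
    return {"email": email, "newsletter": newsletter}

# Most users go with the path of least resistance and accept the default:
signed_up = create_account("user@example.com")

# Opting out demands a deliberate, extra action:
opted_out = create_account("user@example.com", newsletter=False)
```

Nothing here limits anyone’s choice; the architecture simply makes one outcome effortless and the other effortful, which is the whole point.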

Choice architecture is just a tool used by different entities. It can be used to provide a social structure that benefits us or it can be used to have us behave to benefit capitalistic endeavors.

In some situations, this sort of interventional design can also backfire, even when the original goal was altruistic. People aren’t always naive; they can catch on when they are being guided or manipulated. When that happens, it leads to more harm, conspiracy theories, and further mistrust in institutions.

There is no doubt that technology will be the forerunner in helping scale choice architecture to address problems like global warming and systemic racism, and to foster general empathy. But it won’t work if, most of the time, we’re on guard against any checked box presented to us because of a history of misused defaults and nudging.