The Algorithms that Control Us

Camron Godbout
Sep 9, 2021 · 4 min read

Behind the scenes, everything you consume is highly optimized to deliver dopamine

Most of the digital content we consume is delivered to us through some algorithm. That algorithm might be machine learning based or something simpler. Examples of these content delivery systems include Spotify playlists, trending topics on Twitter, posts on a subreddit, and what's displayed on the explore tab of any social media app. All of these algorithms try to maximize how much content the user consumes. They work by taking the historical data of what we have consumed in the past and serving us permutations of what we have liked or consumed before.
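To make that concrete, here is a toy sketch in Python of the "more of the same" logic. The item vectors, the `recommend` function, and the cosine-similarity scoring are my own simplification for illustration, not any platform's actual system; the point is only that candidates get ranked by similarity to your history, highest first.

```python
# A toy sketch of similarity-based recommendation, not any platform's real code.
import numpy as np

def recommend(user_history: np.ndarray, candidates: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the indices of the k candidates most similar to the user's history."""
    # Summarize the user's taste as the average of the item vectors they consumed.
    taste = user_history.mean(axis=0)
    # Cosine similarity between the taste vector and every candidate item.
    scores = candidates @ taste / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(taste) + 1e-9
    )
    # Descending order: the content most similar to your history comes first.
    return np.argsort(scores)[::-1][:k]
```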

If you go on Instagram and open the "explore" tab, chances are you'll see a lot of what you've already looked at in the past. For example, if I look at a bunch of motorcycle pictures, the "explore" tab is going to show me a bunch more motorcycle pictures.

But is this really an "explore" tab? It's more of a "similar" tab.

The internet needs a completely different experience, one that runs contrary to current recommendation algorithms and feeds: instead of what's trending up, show us what's trending down. Instead of what's similar to what we've been looking at, show us what's completely different from what we looked at yesterday.

All of these content delivery systems have been highly optimized to keep you engaged on websites or applications; they're not optimized to bring you value. These applications equate usage time with value delivered to the user: "if you're using the application longer, then it's providing you more value." Think of the key metrics on social media sites. They're all based on making users come back: Weekly Active Users, Monthly Active Users, time spent in app, and so on. They highly optimize feedback loops to give you dopamine, and what better way to keep you coming back than showing you content very similar to the content you have engaged with before. It's quite lazy, because it's much simpler and less risky (the risk being that the user leaves the app or page) to show similar content than to experiment and show you content you may not connect with.

This kind of self-reinforcing feedback loop limits human creativity and exploration over time. A useful analogy is a child who has no fundamental knowledge of how the world works: they spend most of their time exploring and finding things that provide new stimuli. If we applied the iterative feedback model of content delivery that we adults are stuck in, it would be like that child spending most of their time on a single focused task rather than having an appetite for adventure and experimentation to see how the world works.

That's not to say that as adults we should spend the majority of our time exploring and trying new things; there is real value in spending sizable time on tasks we enjoy and on reading and consuming content that is relevant to us. However, current delivery systems don't give us an adequate balance between exploration and greedily consuming content that aligns with the history of what we have consumed.
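As a rough illustration of what a deliberate balance could look like (the split below is an assumption I'm making for the sketch, not how any real feed is built), a feed could reserve a fixed slice for items far from your history:

```python
import random

def mixed_feed(similar_items, unfamiliar_items, size=20, explore_fraction=0.3):
    """Blend a mostly-familiar feed with a fixed share of unfamiliar items."""
    n_explore = min(int(size * explore_fraction), len(unfamiliar_items))
    # Fill most of the feed with the usual similar content...
    feed = list(similar_items[: size - n_explore])
    # ...and reserve the rest for items far from the user's history.
    feed += random.sample(list(unfamiliar_items), n_explore)
    random.shuffle(feed)
    return feed
```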

The problem is that current content delivery systems don't expose us to enough diverse content outside of what we already consume on a daily basis. A great reference for this phenomenon is https://paulskallas.substack.com/p/is-culture-stuck, which describes how our culture has been frozen since the mid-2000s (around 2005). A great quote from that article: "If you time travel back to 2007 wearing what you are now people wouldn't know you're from the future."

How do we address this when the channels we consume digital content through are all highly optimized to show us what's similar? I believe there is a need for an anti-recommendation algorithm, very much akin to StumbleUpon in the early 2000s. Instead of showing users the content they are most interested in, flip the scores into ascending order and show me what I least interact with or am least likely to see. An even simpler step might be showing the similarity scores, the model's prediction of how likely I am to like the content, to give the user some transparency.
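Against the same toy setup as the earlier sketch (again, hypothetical names and scoring, not any real system's code), the anti-recommendation version is almost a one-line change: sort the same scores ascending instead of descending, and hand the scores back so the user can see them.

```python
import numpy as np

def anti_recommend(user_history: np.ndarray, candidates: np.ndarray, k: int = 10):
    """Return the k candidates *least* similar to the user's history, with their scores."""
    taste = user_history.mean(axis=0)
    scores = candidates @ taste / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(taste) + 1e-9
    )
    # Ascending order this time: surface what the model thinks you're least likely to engage with.
    order = np.argsort(scores)[:k]
    # Return the scores too, so the user can see how "similar" each item was judged to be.
    return [(int(i), float(scores[i])) for i in order]
```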

If you’re interested in this or other pieces on similar topics including math, statistics and machine learning please checkout the compute substack and subscribe for free for great content direct to you
