If you or your loved ones spend time on social media, chances are that most of what you see is content recommended by an algorithm. Social media platforms such as Facebook, Instagram, and TikTok provide entertainment to their users and strive to keep their audience engaged.
They achieve this with the help of sophisticated algorithms they develop in-house. These algorithms recommend content that users want to see, keeping them on the platform for long periods. The approach has proven very lucrative for social media giants: the more time people spend on these sites, the more adverts they see, and the more likely they are to purchase something.
Even though many people enjoy personalized content, the techniques that keep audiences engaged for long stretches can also be alarming. For example, if users happen to like cat videos, many will appreciate suggestions for more cat videos. However, engaging content can be addictive and can negatively impact people, and companies do little to address this.
For example, Meta discourages Facebook users from viewing content in un-curated chronological order. Even if users find their way to the “Most Recent” button, which lists posts in reverse chronological order, their News Feed reverts to its algorithm-controlled state once they close the website or shut the app.
Big Tech generally enjoys the perks of AI: algorithms help keep advertisers happy. Still, TikTok, an app popular among children, has had to remind users to take breaks from it. The video-focused social networking service even hired some of its top content creators to produce videos asking TikTok users to step away. Yet this is just a drop in the ocean when it comes to limiting the influence of algorithms.
Many believe that algorithms are also among the main drivers of misinformation. Suppose you’ve liked a specific type of content. In that case, the algorithm will likely suggest more of the same, which can lead users down a rabbit hole where all they see is content that keeps them engaged but also misinforms them. Even though social media companies have been trying to combat misinformation, the problem persists.
Algorithmically recommended content sometimes fuels attention towards harmful material. For example, a go-kart was recently pulled over on a freeway in Los Angeles; the short-distance vehicle was driven by YouTubers staging a stunt for social media content. Algorithm-fueled content can unwittingly encourage others to imitate social media challenges that could lead to severe real-life consequences, injuries, and even death.
Big Tech’s techniques for keeping users engaged could also harm democracy and have life-changing consequences. Last year, Google whistleblower Zach Vorhies claimed in an interview with The Epoch Times that Google “re-wrote their news algorithms to specifically go after Trump.” Left-wing and right-wing audiences alike sometimes never hear each other’s stories, simply because they are engulfed by the liberal or conservative topics their algorithms suggest.
Lawmakers are pushing legislation that would force tech companies to let users access versions of their platforms that aren’t shaped by algorithms, and Big Tech is not happy about it. Social media giants are not the only companies that use such algorithms to engage their audiences: other major tech companies such as Amazon, Netflix, Spotify, Apple, and Google would also be heavily affected if the legislation passes.