By now you have probably heard about engagement algorithms. The topic has been in the news with the recent testimony to a Senate subcommittee by the Facebook whistleblower Frances Haugen. She alleges that the company puts profit ahead of public safety, and that the tech industry should be more tightly regulated.
In short, an engagement algorithm is a set of artificial intelligence (AI) rules and calculations that determine which content is most likely to draw and hold your attention for as long as possible. Most social media companies use engagement algorithms as they compete for user attention in order to place targeted ads, which is how they make money.
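To make the idea concrete, here is a minimal sketch in Python of how an engagement-style ranker might order a feed: predict an engagement score for each post, then show the highest-scoring posts first. The post data, the signals, and the weights are all made up for illustration; real platforms combine far more signals with machine-learned models.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float      # hypothetical model output
    predicted_watch_time: float  # hypothetical model output

def engagement_score(post: Post) -> float:
    # Made-up weighting of two attention signals.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_watch_time

def rank_feed(posts: list[Post]) -> list[Post]:
    # Show the posts most likely to hold attention first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("alice", "cute cat photo", 0.2, 0.3),
    Post("bob", "shocking conspiracy claim", 0.9, 0.8),
    Post("carol", "local news update", 0.4, 0.5),
]
for post in rank_feed(feed):
    print(post.author, round(engagement_score(post), 2))
```

Note that nothing in the scoring function cares what the content says, only how likely it is to be clicked and watched, which is exactly the blind spot this post is about.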
Unfortunately, we are learning that the most sensational, shocking, and controversial content usually draws the most interest from people, which in turn trains the engagement algorithm to promote and amplify more problematic content, such as conspiracy theories and fringe-group messaging.
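This feedback loop can be illustrated with a toy simulation (everything here, including the click probabilities and the update rule, is invented for illustration): posts that get clicked more are boosted more, so sensational content steadily climbs the ranking.

```python
import random

random.seed(0)

# Toy model: each post has a fixed "sensationalism" level that drives
# how often users click it. The ranker's score starts equal for every
# post and is nudged upward on each observed click.
posts = {"measured analysis": 0.2, "shocking rumor": 0.8}
scores = {name: 1.0 for name in posts}

for _ in range(1000):
    for name, sensationalism in posts.items():
        if random.random() < sensationalism:  # user clicks the post
            scores[name] += 0.01              # algorithm boosts it

# After many rounds, the more sensational post dominates the ranking.
print(sorted(scores, key=scores.get, reverse=True))
```

No one programmed the system to prefer rumors over analysis; the preference emerges purely from the click feedback.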
According to Haugen, Facebook has done extensive research showing that content with sensational misinformation is being promoted by its engagement algorithm, but the company's executives are not fixing the problem because of the potential negative impact on ad sales.
She is advocating for regulating the tech industry, just as the government did with the tobacco companies when it was discovered that they were hiding internal research about the hazards of smoking. Given how big and influential tech companies have grown, I agree that sensible regulations are needed to protect consumers, as well as to better promote innovation and competition within the industry.
While AI algorithms are trained by human experts, many of the more sophisticated ones are not fully understood even by the data scientists who set up their rules and calculations. The algorithms can take on a life of their own, as the chess-playing program AlphaZero did. Fortunately, engagement and search algorithms are still being continually modified, which lets social media companies keep some level of control over how their algorithms curate content for their users. Now is the time to come up with a blueprint for making sure that these algorithms learn to identify and suppress problematic content.
While social media companies, including Facebook, have started to do some self-regulating, we still need government oversight as well. Currently, Section 230 of the Communications Decency Act provides website operators with broad protection from liability for user-generated content. This sweeping protection was initially seen as a way of promoting differing opinions and free speech, and is still fiercely defended by many in the industry. However, the downside of engagement algorithms has reopened the debate over whether we need to modify this law.
Already, the US government has passed into law an exception to Section 230. In April 2018, the Senate's Stop Enabling Sex Traffickers Act and the House's Allow States and Victims to Fight Online Sex Trafficking Act became law, together known as FOSTA-SESTA. As a package, the law made it illegal for online services to knowingly assist, facilitate, or support sex trafficking on their platforms. It also amended Section 230 to hold online services legally liable if their users were found to be in any way facilitating illegal sex acts over their platforms. We will need further exceptions to Section 230 to keep pushing social media companies toward taking more responsibility for the content on their platforms.
The good news is that social media is still a relatively young industry. Facebook was started in 2004 and Twitter in 2006. Initially the Facebook feed was shown in chronological order, before the company realized that engagement algorithms could keep its users on the platform longer. I believe that the industry recognizes the need to monitor and tweak engagement algorithms to stop them from promoting hateful and hurtful content. We the public, in the meantime, need to educate ourselves about how engagement algorithms work, and hold social media companies accountable for how they influence our lives.
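The difference between those two eras of the feed comes down to the sort key. A small sketch (with invented posts and invented engagement numbers) makes the contrast plain:

```python
from datetime import datetime

# Toy feed items: (posted_at, predicted_engagement, text). The
# engagement numbers are made up for illustration.
posts = [
    (datetime(2021, 10, 1, 9, 0), 0.2, "morning coffee photo"),
    (datetime(2021, 10, 1, 12, 0), 0.9, "outrage-bait hot take"),
    (datetime(2021, 10, 1, 15, 0), 0.5, "weekend plans"),
]

# The early feed: newest first, regardless of how "engaging" a post is.
chronological = sorted(posts, key=lambda p: p[0], reverse=True)

# The engagement-ranked feed: most attention-grabbing first, regardless of age.
by_engagement = sorted(posts, key=lambda p: p[1], reverse=True)

print([p[2] for p in chronological])
print([p[2] for p in by_engagement])
```

Same posts, same users; only the ordering rule changed, and with it what everyone sees first.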
If you are interested in how we might regulate the tech industry, I've found this article, which first appeared in MIT Sloan Management Review, very interesting. It advocates six steps to properly regulate the tech industry: create an overarching regulatory structure; focus on three overarching objectives; develop standards-based regulations; prioritize based on risk; make supervision digital by default; and collaborate with the private sector. Going into the details of this proposal is beyond the scope of this blog post, but it offers useful context on emerging views about regulating the tech industry.
I’d love to hear other thoughts on engagement algorithms and the possibility of further regulating the tech industry. Please comment or message me directly. Also, subscribe to this blog by email if you enjoy the content.