A user-initiated experiment, dubbed #WearthePants, has prompted scrutiny of LinkedIn's content algorithm after several participants reported significant increases in post visibility upon changing their profile gender to male. The experiment followed reports from some users of declining engagement after LinkedIn announced in August that it was integrating large language models (LLMs) to improve how content is surfaced.
Participants in the #WearthePants experiment, which began with entrepreneurs Cindy Gallop and Jane Evans, set out to test the hypothesis that LinkedIn's algorithm is biased against women. Michelle, a product strategist with over 10,000 followers, told TechCrunch that after changing her profile name to "Michael" and her gender to male, her impressions jumped 200% and her engagements rose 27%. Similarly, founder Marilynn Joyner reported a 238% increase in impressions within a day of making the same changes. Gallop previously noted that a post of hers reached 801 people, while the same content posted by a man reached 10,408, a figure exceeding his own follower count.
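The visibility figures quoted by participants are standard relative-change percentages. A minimal sketch of that arithmetic, applied to the numbers reported in the article (the helper name is ours, not something LinkedIn or the participants use):

```python
def pct_change(before: float, after: float) -> float:
    """Relative change in percent: the metric the participants quote."""
    return (after - before) / before * 100

# A "200% jump" in impressions means traffic roughly tripled:
print(pct_change(100, 300))                 # 200.0

# Gallop's reach comparison, 801 people vs. 10,408 people:
print(round(pct_change(801, 10408), 1))     # 1199.4
```

By this measure, the reach gap Gallop described is roughly a thirteenfold difference, far larger than the percentage changes the other participants reported for their own accounts.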
LinkedIn has maintained that its algorithm and AI systems "do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed." Tim Jurka, LinkedIn's vice president of engineering, and Sakshi Jain, its head of responsible AI and governance, have reiterated this stance. The company told TechCrunch that demographic data is used only for testing, to ensure that content from diverse creators competes equally and that the feed experience is consistent across audiences.
Social algorithm experts acknowledge the complexity of such systems. Brandeis Marshall, a data ethics consultant, told TechCrunch that platforms are "an intricate symphony of algorithms that pull specific mathematical and social levers." Marshall added that LLMs, being trained on human-generated content, can implicitly embed biases, stating that most platforms "innately have embedded a white, male, Western-centric viewpoint" due to who trained the models.
However, Marshall and other experts, including Sarah Dean, an assistant professor of computer science at Cornell, also pointed to numerous other variables that influence algorithmic performance: user interaction history, overall profile content, participation in viral trends, and even writing style. Michelle, for instance, noted that when posting as "Michael," she shifted to a simpler, more direct tone, which she believed contributed to the engagement boost. Dean added that a person's demographics can affect "both sides" of the algorithm: what they see, and who sees what they post.
LinkedIn stated that its AI systems weigh hundreds of signals, including insights from a user's profile, network, and activity, to determine what appears in the feed. The company also noted a 15% year-over-year increase in posting and a 24% rise in comments, suggesting greater competition for feed visibility. While the #WearthePants experiment has highlighted user concerns and the opacity of proprietary algorithms, drawing definitive conclusions about explicit gender bias remains difficult given the many interacting algorithmic factors.