Multiple whistleblowers have told the BBC that Meta and TikTok prioritised algorithmic engagement metrics over user safety controls, creating what they describe as systematic exposure to harmful content. The revelations focus on algorithm design decisions that allegedly increased harassment and exploitation risks, particularly for children and vulnerable users.

The whistleblower accounts describe internal tension between safety teams and product development units, with engagement optimisation consistently winning out in resource-allocation decisions. According to the BBC reporting, safety recommendations were routinely overridden when they conflicted with user-retention targets.

The algorithmic governance problem extends beyond content moderation. Traditional risk frameworks typically address operational failures or compliance lapses after they occur. Algorithm-driven risks, by contrast, compound continuously across millions of users: a ranking change that marginally raises harmful-content exposure does so on every session it serves, so liability exposure scales with platform growth rather than diminishing with operational maturity.

For boards overseeing technology platforms or companies with significant digital user bases, the whistleblower accounts highlight a structural governance challenge. Product algorithm decisions that optimise for engagement metrics may systematically conflict with duty-of-care obligations, particularly towards vulnerable user populations. The tension isn't resolved through compliance checklists or quarterly risk reporting.

The timing matters for Indian companies expanding digital operations. Disclosure requirements under SEBI's Business Responsibility and Sustainability Reporting (BRSR) framework increasingly scrutinise social impact metrics alongside financial performance. Algorithmic design choices that foreseeably harm specific user groups could attract regulatory attention and flag a company in investor ESG screens.

The BBC investigation reveals how product development velocity can outpace risk-assessment capability. Safety teams apparently lacked the authority to halt algorithm deployments that passed technical testing but failed broader harm-assessment criteria. That points to a board-level authority problem rather than an operational execution failure.
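To make the authority gap concrete, here is a minimal sketch of a deployment gate in which a harm assessment can block a release with the same force as a failed technical test. Everything in it, the names, fields, and threshold, is hypothetical, illustrating the structure rather than describing either platform's actual systems.

```python
from dataclasses import dataclass

# Hypothetical sketch of a deployment gate where a harm assessment
# carries the same blocking power as technical tests. All names and
# thresholds are illustrative, not drawn from the BBC reporting.

@dataclass
class AlgorithmChange:
    name: str
    passed_technical_tests: bool
    harm_score: float        # 0.0 (benign) to 1.0 (severe), set by safety review
    affects_minors: bool

HARM_THRESHOLD = 0.2         # illustrative risk tolerance, notionally board-approved

def deployment_gate(change: AlgorithmChange) -> str:
    """A harm-assessment failure blocks release just like a failed test."""
    if not change.passed_technical_tests:
        return "BLOCKED: technical tests failed"
    if change.affects_minors and change.harm_score > 0.0:
        # Changes touching vulnerable segments escalate instead of auto-shipping.
        return "ESCALATE: board-level sign-off required"
    if change.harm_score > HARM_THRESHOLD:
        return "BLOCKED: harm assessment exceeds risk tolerance"
    return "APPROVED"

print(deployment_gate(AlgorithmChange(
    name="engagement_ranker_v7",
    passed_technical_tests=True,
    harm_score=0.35,
    affects_minors=False,
)))  # BLOCKED: harm assessment exceeds risk tolerance
```

The design point is not the code itself but where the override power sits: in this sketch, a conflict involving minors cannot be resolved below board level.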

My Boardroom Takeaway: Directors at companies with significant digital user engagement should consider establishing algorithm governance protocols that require board-level approval for engagement optimisation features targeting vulnerable user segments. The traditional approach of delegating all product decisions to management may create ESG liability gaps, particularly where algorithmic amplification affects children or creates systematic bias exposure. Independent directors may wish to request quarterly reports on algorithm design trade-offs between engagement metrics and user safety outcomes, with specific focus on how conflicting objectives are resolved at the product development level.
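One way to picture what such a quarterly report could contain: a register that pairs each contested feature's projected engagement gain against its projected harm exposure and records who resolved the conflict. The sketch below is purely illustrative; every field name and figure is an assumption, not a reporting standard.

```python
from dataclasses import dataclass

# Illustrative shape of a quarterly trade-off register; all field names
# and figures are hypothetical, sketching what directors might request.

@dataclass
class TradeOffRecord:
    feature: str
    engagement_delta_pct: float  # projected change in the engagement metric
    harm_delta_pct: float        # projected change in the harm-exposure metric
    resolution: str              # "shipped", "modified", or "blocked"
    decided_by: str              # level at which the conflict was resolved

def quarterly_summary(register: list[TradeOffRecord]) -> dict[str, int]:
    """Count genuine conflicts (engagement up AND harm up) by outcome and level."""
    summary: dict[str, int] = {}
    for r in register:
        if r.engagement_delta_pct > 0 and r.harm_delta_pct > 0:
            key = f"{r.resolution} @ {r.decided_by}"
            summary[key] = summary.get(key, 0) + 1
    return summary

register = [
    TradeOffRecord("autoplay_loops", +4.2, +1.1, "shipped", "product lead"),
    TradeOffRecord("teen_dm_suggestions", +2.8, +3.5, "blocked", "safety council"),
]
print(quarterly_summary(register))
# {'shipped @ product lead': 1, 'blocked @ safety council': 1}
```

A register like this makes the escalation pattern visible: if every conflict in the summary was resolved at the product-lead level, that is precisely the authority gap the whistleblower accounts describe.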