Facebook has revealed a physical ‘war room’ at its California headquarters as part of its efforts to tackle election interference tactics.
In September, in the run-up to elections in both Brazil and the US, it created a physical room at its Menlo Park campus where it says its election-interference prevention experts can work together.
Inside the secure room are dozens of employees staring intently at their monitors while data streams across giant dashboards.
They are only allowed to leave the room for short bathroom breaks or to grab food to eat at their desks.
On the walls are posters of the sort Facebook frequently uses to caution or exhort its employees. One reads, ‘Nothing at Facebook is somebody else’s problem.’
That motto might strike some as ironic, given that the war room was created to counter threats that almost no one at the company, least of all CEO Mark Zuckerberg, took seriously just two years ago — and which the company’s critics now believe pose a threat to democracy.
Days after President Donald Trump’s surprise victory, Zuckerberg brushed off assertions that the outcome had been influenced by fictional news stories on Facebook, calling the idea ‘pretty crazy’.
Earlier this week the social network introduced new rules around political advertising in the UK, requiring anyone who wants to post political ads to verify their identity and location, and added the UK to its library of political adverts which also includes the US and Brazil.
The war room’s technology draws on the artificial-intelligence system Facebook has been using to help identify ‘inauthentic’ posts and user behavior.
Samidh Chakrabarti, director of product management for civic engagement, said: ‘The war room has over two dozen experts from across the company – including from our threat intelligence, data science, software engineering, research, community operations and legal teams.
‘These employees represent and are supported by the more than 20,000 people working on safety and security across Facebook.
‘When everyone is in the same place, the teams can make decisions more quickly, reacting immediately to any threats identified by our systems, which can reduce the spread of potentially harmful content.’
‘Our machine learning and artificial intelligence technology is now able to block or disable fake accounts more effectively – the root cause of so many issues.
‘We’ve increased transparency and accountability in our advertising.
‘And we continue to make progress in fighting false news and misinformation.
‘That said, security remains an arms race and staying ahead of these adversaries will take continued improvement over time.
‘We’re committed to the challenge.’