The company’s engineers developed a new method to help them identify and prevent harmful behavior on the platform, such as users spreading spam, scamming others, or buying and selling weapons and drugs. They can now model these behaviors with AI-powered bots by letting them loose on a parallel version of Facebook. Researchers then study the bots’ behavior in simulation and experiment with new ways to stop it.
The research is led by Facebook engineer Mark Harman and the company’s AI department in London. Speaking to journalists, Harman said WW (the simulation being researched) is a hugely flexible tool that could be used to limit a wide range of harmful behavior on the site, and he gave the example of using the simulation to develop new defenses against scammers.
To model this behavior in WW, Facebook engineers created a group of “innocent” bots to act as targets and trained a number of “bad” bots that explored the network to try to find them. The engineers then tried different ways to stop the bad bots, introducing various constraints, like limiting the number of private messages and posts the bots could send each minute, to see how this affected their behavior.
“We apply ‘speed bumps’ to the actions and observations our bots can perform, and so quickly explore the possible changes that we could make to the products to inhibit harmful behavior without hurting normal behavior,” says Harman.
“We can scale this up to tens or hundreds of thousands of bots and therefore, in parallel, search many, many different possible […] constraint vectors.”
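The “speed bump” idea can be illustrated with a rough sketch, which assumes nothing about WW’s actual internals: a toy simulation caps a bad bot’s message rate, letting an engineer sweep different constraint values and compare how much harmful traffic gets through. All names and numbers here are hypothetical.

```python
# Toy illustration of a rate-limit "speed bump" on a simulated bad bot.
# This is NOT Facebook's WW code; every name and value is hypothetical.

class Bot:
    def __init__(self, name, msgs_per_minute):
        self.name = name
        self.limit = msgs_per_minute
        self.sent_this_minute = 0

    def try_send(self, target):
        # The speed bump: drop any message beyond the per-minute cap.
        if self.sent_this_minute >= self.limit:
            return False
        self.sent_this_minute += 1
        return True


def simulate(limit, minutes=10, attempts_per_minute=20):
    """Count how many scam messages get through under a given cap."""
    bad_bot = Bot("scammer", limit)
    delivered = 0
    for _ in range(minutes):
        bad_bot.sent_this_minute = 0  # counter resets each minute
        for _ in range(attempts_per_minute):
            if bad_bot.try_send("innocent_target"):
                delivered += 1
    return delivered


# Sweep several candidate constraints and compare their effect,
# mirroring the idea of searching over "constraint vectors":
for limit in (5, 10, 20):
    print(f"limit={limit}: {simulate(limit)} messages delivered")
```

In a real experiment one would also run “normal” bots through the same constraint to check that legitimate activity is not throttled; here the sweep simply shows how tighter caps reduce the harmful traffic that gets through.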
Harman concluded:
“At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before,” says Harman. “That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch up.”