Facebook and other social networking sites are scanning users’ chats for signs of criminal activity and notifying police when suspicious behavior is detected, according to a report.
At least one alleged child predator was brought to trial in March as a direct result of Facebook’s chat scanning, according to the Reuters report.
Facebook is among the many companies that are embracing a combination of new technologies and human monitoring to keep an eye on sex predators.
The process begins with software that monitors chats for words or phrases that signal something inappropriate, such as vulgar language or the exchange of personal information.
The software pays more attention to chats between users who have only a loose connection on the site and whose profile data indicate a wide age gap. The scanning program is also “smart”: it is trained to watch for phrases found in chat records previously obtained from criminals, including sexual predators.
If the scanning software finds a suspicious chat exchange, it notifies Facebook employees, who can then determine if police should be notified.
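The pipeline described above — phrase matching, relationship and age-gap heuristics, then escalation to a human reviewer — can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual system: the phrase list, the age-gap threshold, and the scoring weights are all invented for the example.

```python
# Illustrative sketch of a chat-scanning pipeline like the one described
# above. NOT Facebook's actual system: phrases, threshold, and weights
# are hypothetical.

SUSPICIOUS_PHRASES = {"send me your address", "don't tell your parents"}
AGE_GAP_THRESHOLD = 10  # years; wide gaps get extra scrutiny


def score_chat(message, sender_age, recipient_age, loosely_connected):
    """Return a suspicion score for a single chat message."""
    score = 0
    text = message.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in text:
            score += 2  # phrase patterns learned from past predator chats
    if loosely_connected and abs(sender_age - recipient_age) >= AGE_GAP_THRESHOLD:
        score += 1  # loose ties plus a wide age gap raise priority
    return score


def flag_for_review(message, sender_age, recipient_age, loosely_connected):
    # A low false-positive rate matters: employees are notified only when
    # combined signals cross a threshold, not on any single weak match.
    return score_chat(message, sender_age, recipient_age, loosely_connected) >= 2
```

The threshold gate at the end reflects the point made below about false positives: no single weak signal is enough to put a private conversation in front of a human.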
One of the strategies Facebook uses to protect minors is to limit how users under 18 can interact on the site and to make it harder for adults to find them. Minors don't show up in public searches, only friends of friends can send them Facebook messages, and only friends can chat with them.
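The three rules above amount to simple permission checks keyed on the recipient's age. A minimal sketch, assuming hypothetical field names and friend-set inputs (this is not Facebook's API):

```python
# Hypothetical encoding of the minor-protection rules described above.
# Field names and function signatures are illustrative only.

ADULT_AGE = 18


def visible_in_public_search(profile):
    # Minors never appear in public searches.
    return profile["age"] >= ADULT_AGE


def can_message(sender_id, recipient_age, friends_of_friends):
    # Only friends of friends may send messages to a minor.
    if recipient_age < ADULT_AGE:
        return sender_id in friends_of_friends
    return True


def can_chat(sender_id, recipient_age, friends):
    # Only direct friends may open a chat with a minor.
    if recipient_age < ADULT_AGE:
        return sender_id in friends
    return True
```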
Facebook, like most other social networking platforms, generally avoids discussing its safety practices, both to avoid scaring away users and because the monitoring catches relatively few wrongdoers.
"We've never wanted to set up an environment where we have employees looking at private communications, so it's really important that we use technology that has a very low false-positive rate," Facebook Chief Security Officer Joe Sullivan told Reuters. In addition, he said, Facebook doesn't probe deeply into what it thinks are pre-existing relationships.
Facebook works with law enforcement “where appropriate and to the extent required by law to ensure the safety of the people who use Facebook,” according to a page on its site.
“We may disclose information pursuant to subpoenas, court orders, or other requests (including criminal and civil matters) if we have a good faith belief that the response is required by law. This may include respecting requests from jurisdictions outside of the United States where we have a good faith belief that the response is required by law under the local laws in that jurisdiction, apply to users from that jurisdiction, and are consistent with generally accepted international standards.
“We may also share information when we have a good faith belief it is necessary to prevent fraud or other illegal activity, to prevent imminent bodily harm, or to protect ourselves and you from people violating our Statement of Rights and Responsibilities. This may include sharing information with other companies, lawyers, courts or other government entities.”
Companies can configure the software to take defensive steps automatically, such as temporarily silencing users who break the rules or banning them permanently. Moderators at the company are then notified only after the threat has been eliminated without human intervention.
"There are companies out there that are doing a very good job, working within the confines of what they have available," said Brooke Donahue, a supervisory special agent with an FBI team devoted to Internet predators and child pornography. "There are companies out there that are more concerned about profitability."
From a business perspective, however, there are powerful reasons not to be so restrictive, starting with teens' expectation of more freedom of expression as they age. If they don't find it on one site, they will find it somewhere else.
But Facebook has cooperated with police investigations in other instances. In April, it complied with a police request by sending printouts of wall posts, photos, and login/IP data for a criminal suspect.
Facebook is definitely setting an example here.