Yesterday’s government ban on access to Twitter in Turkey is a perfect example of the problems that arise when online imagery is not carefully monitored at source before going live.

The Turkish authorities were responding to more than 100 disturbing images of Monday’s terrorist bombing in south-east Turkey, which killed 32 people, appearing on Twitter. The move follows a court ruling banning the publication of images of the attack in the media, particularly on the internet and social channels. Twitter is currently removing all associated images, and will remain blocked in Turkey until that work is complete.

This situation is embarrassing for Twitter and could also have business implications for advertisers on the platform, whether operating within that market or trying to reach it. However, it is unlikely to cause the reputational damage of a brand’s ad appearing alongside a disturbing or offensive image. That remains a major risk, particularly with user-generated content such as Twitter’s, unless accurate automatic monitoring of still and video images is implemented at source.

Despite the increasingly visual nature of the internet, many companies still validate a picture or video using outdated contextual data, that is, the text around it, or they rely on manual moderation and intervention. This may work for a small volume of video content within a controlled environment, but apply it to the large volumes of user-generated content (UGC) that more and more companies are investing heavily in, and there is a far higher risk of inappropriate or offensive content slipping through the cracks. The sketch below illustrates why the contextual approach breaks down.
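To make that weakness concrete, here is a minimal sketch in Python of a purely contextual check. Everything in it is illustrative rather than any vendor’s actual system: the flagged terms are invented, and the check approves an image whenever the surrounding text looks clean, so a graphic image posted with an innocuous caption passes untouched.

```python
# Illustrative only: a naive contextual check that validates an image from the
# text around it. The pixels themselves are never inspected, which is the flaw.

FLAGGED_TERMS = {"bombing", "attack", "graphic", "explosion"}

def passes_contextual_check(surrounding_text: str) -> bool:
    """Approve the image if no flagged term appears in the nearby text."""
    words = {w.strip(".,!?").lower() for w in surrounding_text.split()}
    return words.isdisjoint(FLAGGED_TERMS)

# A disturbing photo with a harmless caption is wrongly approved:
print(passes_contextual_check("Photos from this morning"))  # True
```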

The good news is that technology that can ‘understand’ and classify video content, as well as automate and improve the whole process of placing video ads, already exists. Introducing an automated solution on upload that combines visual recognition technology with brand safety criteria provides a comprehensive understanding of the visual content. This removes the threat of an unsafe image going live before it is spotted by a moderator, the web audience or a brand manager. It is not only a safer solution but, where the site has scale, also a more efficient one, as there is no need to bring in teams of human moderators. It also facilitates better targeting of ads, increasing their effectiveness.
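As a rough sketch of that upload-time approach, assuming a hypothetical visual-recognition model and invented category names and thresholds (this is not WeSEE’s implementation): every image is scored by the model and checked against brand-safety limits before it can go live.

```python
from dataclasses import dataclass

@dataclass
class Classification:
    label: str         # e.g. "violence", "adult", "safe"
    confidence: float  # model confidence, 0.0 to 1.0

# Hypothetical brand-safety policy: maximum tolerated confidence per category.
BRAND_SAFETY_THRESHOLDS = {"violence": 0.2, "adult": 0.1, "weapons": 0.3}

def classify_image(image_bytes: bytes) -> list[Classification]:
    """Stand-in for a trained visual-recognition model. A real deployment
    would score the actual pixels; this dummy marks everything safe."""
    return [Classification("safe", 0.99)]

def moderate_on_upload(image_bytes: bytes) -> bool:
    """Return True only if the image clears every brand-safety threshold,
    so unsafe content is held back before it ever goes live."""
    for result in classify_image(image_bytes):
        limit = BRAND_SAFETY_THRESHOLDS.get(result.label)
        if limit is not None and result.confidence > limit:
            return False  # quarantine for human review instead of publishing
    return True
```

Because the check runs at upload, nothing is published until it passes; content that fails can be quarantined for human review, keeping moderators as a backstop rather than the first line of defence.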

If Twitter is finding it difficult to identify visual content of this nature now, imagine the problem it will face once its live-streaming product launches. How long before a terror group uses that technology to broadcast an atrocity live?

Adrian Moxley, chief visionary officer at WeSEE

Originally posted on WallBlog, 23 July 2015