One of the issues I see every day with visual marketing is that many businesses continue to use stock images rather than create their own content. There is no question that including visual content in your social media and digital marketing, when implemented well, will increase engagement and reach.
Have you ever looked at the faces in stock images and wondered what the people in the photographs are thinking? Well, there is now a free resource available from the Microsoft Project Oxford team that uses machine learning and artificial intelligence to determine the emotions of people in photographs from their facial expressions.
The accompanying blog post states that the tools, many of which are used in Microsoft’s own products, are designed for developers who don’t necessarily have machine learning or artificial intelligence expertise but want to include capabilities like speech, vision and language understanding in their apps.
The emotion tool that was recently made available can be used to create systems that recognise eight emotional states – anger, contempt, disgust, fear, happiness, neutral, sadness and surprise – and is based on universal facial expressions that reflect those feelings.
So now before you use a stock image or a bespoke photograph you have taken for the campaign you are about to launch, you might want to check the emotion that is being expressed in case the photograph does not support your campaign goals.
I decided to test the tool and used a photograph from the Web Summit blog (not a stock image) – you can upload a photograph or, as I did, simply submit the URL of an image.
In the image below, you can see that each face is labelled with the emotion the tool believes the person is expressing.
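For developers who want to try the same test programmatically, here is a minimal Python sketch of submitting an image URL to the Emotion API and reading back the strongest emotion per face. The endpoint, header name, placeholder key and response shape are my assumptions based on the public beta documentation, not something taken from this post.

```python
import json
import urllib.request

# Assumed public-beta endpoint for the Project Oxford Emotion API.
EMOTION_ENDPOINT = "https://api.projectoxford.ai/emotion/v1.0/recognize"


def build_request(image_url, subscription_key):
    """Prepare the POST request that submits an image URL for emotion analysis."""
    headers = {
        "Content-Type": "application/json",
        # Subscription key header used by Project Oxford services (assumed name).
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        EMOTION_ENDPOINT, data=body, headers=headers, method="POST"
    )


def top_emotion(face_result):
    """Return the highest-scoring emotion from one face's score dictionary."""
    scores = face_result["scores"]
    return max(scores, key=scores.get)


# Assumed response shape: one entry per detected face, with a rectangle
# and a score for each of the eight emotional states.
sample_face = {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 97},
    "scores": {
        "anger": 0.01, "contempt": 0.0, "disgust": 0.0, "fear": 0.0,
        "happiness": 0.95, "neutral": 0.03, "sadness": 0.01, "surprise": 0.0,
    },
}
```

To actually run the query you would send the request with `urllib.request.urlopen` using a valid subscription key; the sketch above only prepares the request and shows how one face's scores might be interpreted.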
The facial recognition software can be used by developers in many programmes and campaigns – one example is the My Moustache microsite, which uses the technology to rate facial hair and also has a fundraising element for Movember.
It just might be interesting to run some LinkedIn photographs through Project Oxford!
Microsoft is releasing a number of tools under Project Oxford over the next few months, and the announcement noted that limited free trials will be available for:
Spell check: a tool developers can add to their mobile- or cloud-based apps and other products. It recognises slang words such as “gonna,” as well as brand names, common naming errors and difficult-to-spot mistakes such as “four” versus “for.” It also learns new brand names and expressions as they are coined and become popular.
Video: this tool lets customers easily analyse and edit videos by tracking faces, detecting motion and stabilising shaky video. It will be available in beta by the end of the year.
Speaker recognition: this tool can be used to recognise who is speaking based on learning the particulars of an individual’s voice. A developer could use it as a security measure since a person’s voice, like a fingerprint, is unique. It will be available as a public beta by the end of the year.
Custom Recognition Intelligent Services: this tool makes it easier for people to customise speech recognition for challenging environments, such as a noisy public space. It will be available as an invite-only beta by the end of the year.
These are all interesting advances, and I wonder what we might see in the year ahead as marketing campaigns make use of artificial intelligence.