Don’t be blinkered to the data issues of ‘see, snap, buy’

Dave Gurney, CEO, Alchemetrics

It has long been recognised that a picture is worth a thousand words – a view reinforced by MIT research which shows that 90% of information transmitted to the human brain is visual – so it is perhaps unsurprising that the visual search market is set for boom-times.
According to one forecast, the global market will surpass $14.73bn by 2023, growing at a CAGR of over 9%. And while visual search is a long way from overtaking traditional text-based search, its growth trajectory can’t be ignored.
The evidence is there for all to see. Google Images is now the second largest player in the search engine market (behind only traditional Google), with 21% of searches starting there, and Pinterest reports that its users carry out more than 600 million monthly searches using Pinterest Lens.
Meanwhile, GlobalData reports that 20% of app users make use of visual search when the feature is available, while a study by Slyce reveals that almost three quarters (74%) of consumers say text-based keyword searches are inefficient at helping them find the right product online when shopping.
There are many applications of visual search – for instance, snapping a landmark to find out where you are on a map, or taking a photo of a plant to identify what it is – but the most mainstream application is as a shopping enabler.
Increasingly, consumers are looking to perform what has been coined “see, snap, buy”. For instance, while walking down the street you see someone wearing a pair of shoes that you like. You take a photo of them, upload the photo to a visual search application such as Google Lens, Google Images, Pinterest Lens or Asos Style Match, and wait for the results to be returned – many of which provide you with the ability to buy the same shoes or a very similar pair.
Visual search is powered by computer vision and trained by machine learning. Computer vision essentially enables computers to see and, crucially, to understand what they are seeing, so that they can do something useful with that knowledge. Machine learning then adds the nuances, such as enabling the computer to tell the difference between slight variations – for instance, returning images of a similar pair of shoes.
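At its core, this kind of matching is often done by converting each image into a numeric “embedding” and ranking catalogue items by similarity. The sketch below illustrates the idea only: the item names and four-dimensional vectors are made up for the example, and a real system would generate embeddings with a trained computer-vision model rather than by hand.

```python
import math

# Hypothetical catalogue: in practice each vector would be an embedding
# produced by a computer-vision model from a product photo. These short
# made-up vectors stand in for those embeddings.
catalogue = {
    "red_trainer":  [0.9, 0.1, 0.3, 0.0],
    "red_heel":     [0.8, 0.2, 0.9, 0.1],
    "blue_trainer": [0.1, 0.9, 0.3, 0.0],
}

def cosine_similarity(a, b):
    # Measures how alike two embeddings are, ignoring their magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def visual_search(query_vec, catalogue, top_k=2):
    # Rank catalogue items by similarity to the shopper's snapped photo.
    ranked = sorted(catalogue.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# The shopper's photo, embedded by the same (hypothetical) model:
snapped_shoe = [0.85, 0.15, 0.35, 0.05]
print(visual_search(snapped_shoe, catalogue))  # → ['red_trainer', 'red_heel']
```

The “similar pair of shoes” behaviour described above falls out naturally: items that are not exact matches but sit close in embedding space still rank highly.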
Clearly, “see, snap, buy” adds another layer to the customer journey, and retail brands in particular must add visual search to their omnichannel journey or risk missing out on significant incremental revenue.
This means understanding how to optimise images so that they are returned by visual search engines, considering visual ads on platforms such as Pinterest, and identifying the impact of all this on their customer data platform – such as integrating a computer vision platform – not to mention the challenge of managing the customer data that follows.
Along with voice search, visual search is widening the search parameters, and consumers now expect to be able to search in the manner most appropriate to the situation in which they find themselves. For retail brands it opens up a wealth of opportunity; however, companies ignore the potential data-based headaches at their peril.

Dave Gurney is chief executive of Alchemetrics
