Resources
Overview
Clarifai's original public prediction models were built around a simple paradigm: submit an image or video and receive concept predictions in a structured JSON response, batched together if necessary.
The Search API extends that functionality by enabling developers to index images within a Clarifai application. Not only can users receive predictions for that content, but a variety of searches can now be run across that library of visual content. You can find visually similar images, all instances of a General Model concept, or even images that resemble a given image crop.
On an application level, the Search API can be used in a variety of powerful ways. Implementations range from purely internal workflows and subtle background applications to explicit, user-facing search and discovery experiences.
The Visual Search algorithm evaluates images at the pixel level and takes the following into account:
- Zoom
- Angle
- Colors
- Shapes
- Lighting
For the best results, keep your images as consistent as possible (e.g. always cropped the same way).
Use Cases
Retail
- Consumer visual search
- Finding products in an inventory similar to a user's Instagram + Pinterest preferences
- An explicit input into larger recommendation systems
- "You may also like" product carousels
- Recommended items within drip campaign emails
Mobile discovery
- In-app flower identification
Stock photography and content management
- Display visually similar images as the user browses within their library
- Search by image functionality
- De-duplication workflow
Design and content marketing
- Explore large and unstructured internal image libraries
Structure
Application
At the moment, you can only search across content you've indexed in our cloud. Clarifai users index their visual content within a Clarifai Application; applications can be created in your developer account at http://clarifai.com/apps.
Inputs
Inputs are images you would like to store, train on, and/or search across. If you have an e-commerce catalog containing 10,000 product images that you would like to use with visual search, your Clarifai application would have to contain 10,000 inputs. One input = one image. Adding an image to an application can be as simple as providing an image URL, as in the sketch below.
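For example, indexing a single image by URL is one POST request. This is a minimal sketch assuming the v2 REST endpoint https://api.clarifai.com/v2/inputs and a placeholder API key; the exact payload shape and authentication header may differ for your client version.

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

def add_input(image_url):
    """Index one image (one input) in the application by URL."""
    payload = {"inputs": [{"data": {"image": {"url": image_url}}}]}
    resp = requests.post(f"{BASE_URL}/inputs",
                         json=payload,
                         headers={"Authorization": f"Key {API_KEY}"})
    resp.raise_for_status()
    return resp.json()

add_input("https://samples.clarifai.com/metro-north.jpg")
```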
As you add images to your Application, each image is predicted on with the General Model and stored with its top 20 concept predictions. Concepts represent visually similar features (objects, context, patterns, style), and this enables you to 'Search By Concept' and instantly find all instances of 'dog', 'ocean', 'wildlife', 'abstract', 'portrait', etc. across your indexed images. You can perform concept searches using any of the 11,000 General Model concepts.
Metadata
Images can be indexed with just an image URL. Alternatively, you can index an image with associated custom metadata via the API. Metadata can be any arbitrary JSON and can include items such as "Brand", "Size", or "Description". With this metadata added, you can perform filtered and more complex searches across your content.
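As a sketch of how that indexing call might look, metadata is attached as an arbitrary JSON object alongside the image URL. The endpoint, key, and all field names and values below are illustrative assumptions, not a fixed schema.

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

# Index an image together with arbitrary JSON metadata (hypothetical product fields).
payload = {
    "inputs": [{
        "data": {
            "image": {"url": "https://example.com/products/sneaker-123.jpg"},
            "metadata": {
                "brand": "Adidas",
                "style": "running",
                "size": "10",
                "description": "Lightweight running sneaker",
            },
        }
    }]
}
requests.post(f"{BASE_URL}/inputs", json=payload,
              headers={"Authorization": f"Key {API_KEY}"}).raise_for_status()
```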
Searching for all images with a metadata "brand" of "Adidas" is then as easy as entering that key and value in our search bar.
You can view an image's associated metadata by opening the three-dot menu in the top right of its thumbnail, or by clicking on the image and using the menu in the top right.
Search Types
Search by Concept(s)
When you enter General Model and/or Custom Model concepts in the search field, the results are ranked by how strongly your indexed images predict for those concepts. You can add as many concepts to this search as you'd like to achieve maximum granularity.
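Via the API, a concept search is a query against the searches endpoint. The sketch below assumes the v2 /v2/searches query format with an "ands" list of concept clauses; the exact schema and response shape may vary by API version.

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

# Find all indexed images the General Model predicts as both "dog" and "wildlife".
query = {
    "query": {
        "ands": [
            {"output": {"data": {"concepts": [{"name": "dog", "value": 1}]}}},
            {"output": {"data": {"concepts": [{"name": "wildlife", "value": 1}]}}},
        ]
    }
}
resp = requests.post(f"{BASE_URL}/searches", json=query,
                     headers={"Authorization": f"Key {API_KEY}"})
hits = resp.json().get("hits", [])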
Search by Image
Given an external image that isn't in your collection, you can find visually similar images within the application. In its simplest form it can be described as "here's an image, find all that are like this."
By default, the API returns the top 20 most similar images, each with a similarity score. Duplicates and near duplicates will typically score .99998 or higher.
In the UI, results are ordered by descending visual similarity score, starting from the top left.
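A search-by-image request passes the query image itself inside the search query. As with the concept search above, the endpoint, payload shape, and response fields shown here are assumptions based on the v2 API, and the query URL is a placeholder.

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

# "Here's an image, find all that are like this."
query = {
    "query": {
        "ands": [
            {"output": {"input": {"data": {"image": {
                "url": "https://example.com/query-image.jpg"}}}}}
        ]
    }
}
resp = requests.post(f"{BASE_URL}/searches", json=query,
                     headers={"Authorization": f"Key {API_KEY}"})
# Each hit pairs a similarity score with the matching indexed input.
for hit in resp.json().get("hits", []):
    print(hit["score"], hit["input"]["data"]["image"]["url"])
```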
Search by Image + Metadata
This operation searches for sneakers visually similar to the given image, but only amongst inputs whose metadata "style" field is "running". Constraining the search this way can give you more specific results within certain product categories.
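In the API, the two constraints would simply be combined in the same query. This is a sketch under the same assumptions as the earlier examples (v2 /v2/searches, placeholder key and URLs, illustrative metadata fields).

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

query = {
    "query": {
        "ands": [
            # Visual similarity to the query image...
            {"output": {"input": {"data": {"image": {
                "url": "https://example.com/query-sneaker.jpg"}}}}},
            # ...restricted to inputs indexed with metadata {"style": "running"}.
            {"input": {"data": {"metadata": {"style": "running"}}}},
        ]
    }
}
resp = requests.post(f"{BASE_URL}/searches", json=query,
                     headers={"Authorization": f"Key {API_KEY}"})
hits = resp.json().get("hits", [])
```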
Search by Image + Concept
Similar to searching with metadata, this search for a visually similar image across your application is focused on a smaller pool of candidates. Searching by image with a concept first finds all images in your app that the General Model has identified as the given concept ("leather" here) and then finds the most visually similar images/products within that subset. This is often helpful when the given image is a visually distinct pattern but not necessarily an image of a whole product.
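As a sketch (same assumptions about the v2 query format and placeholder URLs as above), the concept clause and the image clause are combined in one query:

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

query = {
    "query": {
        "ands": [
            # Only consider inputs the General Model tagged as "leather"...
            {"output": {"data": {"concepts": [{"name": "leather", "value": 1}]}}},
            # ...then rank that subset by visual similarity to the query image.
            {"output": {"input": {"data": {"image": {
                "url": "https://example.com/query-texture.jpg"}}}}},
        ]
    }
}
resp = requests.post(f"{BASE_URL}/searches", json=query,
                     headers={"Authorization": f"Key {API_KEY}"})
hits = resp.json().get("hits", [])
```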
Cropped Search
Cropped search in both the UI and API allows you to pass in a crop of your original query image. Results will reflect that new crop's pixels and visually distinct features.
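Via the API, the crop is attached to the query image itself. The crop coordinate format below ([top, left, bottom, right] as fractions of the image dimensions) is an assumption based on the v2 API, and the query URL is a placeholder.

```python
import requests

API_KEY = "YOUR_API_KEY"          # assumption: an API key for your application
BASE_URL = "https://api.clarifai.com/v2"

query = {
    "query": {
        "ands": [
            {"output": {"input": {"data": {"image": {
                "url": "https://example.com/query-image.jpg",
                # Assumed crop format: proportional [top, left, bottom, right];
                # here, the central region of the image.
                "crop": [0.25, 0.25, 0.75, 0.75],
            }}}}}
        ]
    }
}
resp = requests.post(f"{BASE_URL}/searches", json=query,
                     headers={"Authorization": f"Key {API_KEY}"})
hits = resp.json().get("hits", [])
```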