Some helpful links:
v2 Developer Guide
v2 Account Plans
v2 Pricing

Hear ye hear ye! Our v2 API is the hot, new standard and we want everybody to be a part of it! So what does that mean for everyone who's accustomed to using v1 of our API? In short, you can do everything in v2 that you could in v1 and more, including:

  • Teaching our API to recognize new concepts with Custom Training
  • Searching images by visual similarity, tag, or a combination of both with Visual Search
  • Adding custom information (like price, SKU, brand, etc.) to images with custom metadata
  • Exploring and managing your models and media assets with our clean and intuitive user interface
  • Getting the full picture of your account activity with better usage reporting
  • Authenticating with keys that never expire, and
  • A new Video API!

Read on for a walkthrough on how to easily and seamlessly transition your projects from v1 to v2 so you can take advantage of our new usage-based pricing (link at the top) and access the full range of features in our v2 API!

Step 1: Log on to our Developer Hub here

(Your email and password are the same as for v1)

Step 2: Verify Your Email

After logging in, click on the Profile option in the left-hand menu and then scroll down to the Email section:

Your email address may be verified already, but if it isn't you'll want to click on that "Resend Verification" link to the right of your email address. This will move you from the Unverified Email Plan to the v2 Free Plan.


Step 3: Create some API Keys!

Unlike our v1 API, v2 uses API Keys instead of Access Tokens, and these never expire. You can view and create these via the API Keys menu item:

You'll have a default application that you can apply your new key to, and you can also create more applications via the Applications link. This key is your gateway to our API, so make sure to keep it safe and secure!

For more info on API Keys, check out our dedicated help article here
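If you'd like a quick sanity check that your new key works before touching any of the client libraries, here's a minimal sketch in Python. It assumes the third-party requests package and the v2 model-listing endpoint; the "Authorization: Key ..." header format is the same one you'll see in the cURL examples in Step 6:

import os

import requests

# Keep the key out of source control; read it from the environment instead.
API_KEY = os.environ["CLARIFAI_API_KEY"]

# v2 authenticates every request with an "Authorization: Key ..." header.
resp = requests.get(
    "https://api.clarifai.com/v2/models",
    headers={"Authorization": "Key " + API_KEY},
)
print(resp.status_code)  # 200 means your key is good to go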


Step 4: Check out our new models!

We have a variety of new visual recognition models that you can choose from in v2, so check 'em all out! You'll now be able to access all of the v1 models as well as Apparel, Celebrity, Demographics, Embeddings, Faces, Focus, Logos and Moderation. Furthermore, you can now train your OWN models in v2 using Custom Training, which you can learn more about here.

Here's a quick look at the new model IDs for your v1 models before we jump into the code:

  • General: aaa03c23b3724a16a56b629203edc62c 
  • Food: bd367be194cf45149e75f01d59f77ba7 
  • NSFW: e9576d86d2004ed1a38ba0cf39ecb4b1 
  • Travel: eee28c313d69466f836ab83287a54ed9 
  • Wedding: c386b7a870114f4a87477c0824499348
  • Color: eeed0b6733a644cea07cf4c60f87ebb7 
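And if you'd rather not copy-paste hex strings around, here's a small, hypothetical lookup helper in Python. The names and IDs are just the list above, and the URL pattern is the v2 outputs endpoint you'll see in Step 6:

# Hypothetical helper: map the familiar v1 model names to their v2 IDs.
V2_MODEL_IDS = {
    "general": "aaa03c23b3724a16a56b629203edc62c",
    "food": "bd367be194cf45149e75f01d59f77ba7",
    "nsfw": "e9576d86d2004ed1a38ba0cf39ecb4b1",
    "travel": "eee28c313d69466f836ab83287a54ed9",
    "wedding": "c386b7a870114f4a87477c0824499348",
    "color": "eeed0b6733a644cea07cf4c60f87ebb7",
}

def outputs_url(model_name):
    """Build the v2 predict endpoint for one of the models above."""
    return "https://api.clarifai.com/v2/models/%s/outputs" % V2_MODEL_IDS[model_name]

print(outputs_url("general"))
# https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs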


Step 5: Understand your new plan

You'll notice from our new Account Plans that things have changed a bit with the naming and pricing. The good news is that we've gone ahead and transitioned you to one of these automatically! 

Once you log in, go to the Billing section. You'll see that you are now on either the v2 Free Plan or the v2 Unverified Email Plan; if the latter applies, make sure Step 2 is completed. If you'd like to sign up for our Paid Tier (and get unlimited API usage), go ahead and change your plan to "Standard" by clicking on the "Change Plan" button.

That's it!


Step 6: Adjust your code for v2!

Below you'll find some code examples for cURL, JavaScript, Python, Java and Objective-C. We'll be using the General model in all of these and more help can be found in our client installation instructions and quickstart examples so feel free to reference those if you need to.


cURL

So you're using HTTP requests - great! As you may recall, the 'Tag' endpoint was used to tag the contents of your images or videos in v1. In v2, we renamed this endpoint to

models/{model-id}/outputs

to more accurately reflect what our technology is doing. Remember that when we 'Tag'ged an image in v1, we told it which model to use; we do essentially the same thing in v2 via the model's ID.

In v1 the API allowed both GET and POST requests for tagging, but v2 only uses POSTs (at least for now...). Images and videos also use different parameters now, so make sure to account for that as well:

v1 API

IMAGES or VIDEOS

Tag a URL

curl "https://api.clarifai.com/v1/tag/" \    
   -X POST --data-urlencode "url=https://samples.clarifai.com/metro-north.jpg" \    
   -H 'Authorization: Bearer TOKEN'

Tag a local file

curl "https://api.clarifai.com/v1/tag/" \    
   -X POST -F "encoded_data=@/Users/USER/my_image.jpeg" \    
   -H 'Authorization: Bearer TOKEN'

v2 API

IMAGES

Tag a URL

curl -X POST \
  -H "Authorization: Key API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      }
    ]
  }' \
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs

Tag a local file

curl -X POST \
  -H "Authorization: Key API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "data": {
          "image": {
            "base64": "'"$(base64 /home/user/image.jpeg)"'"
          }
        }
      }
    ]
  }' \
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs


VIDEOS

Tag a URL

curl -X POST \
  -H "Authorization: Key API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "data": {
          "video": {
            "url": "https://samples.clarifai.com/beer.mp4"
          }
        }
      }
    ]
  }' \
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs

Tag a local file

curl -X POST \
  -H "Authorization: Key API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "data": {
          "video": {
            "base64": "'"$(base64 /home/user/video.mp4)"'"
          }
        }
      }
    ]
  }' \
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs


JavaScript

Now let’s take a look at some JS! Note that video calls have an extra {video: true} parameter in v2.

As with v1, v2 installs via npm:

$ npm install clarifai

In v2 you can also replace the model ID with one of the ID aliases defined in the v2 client, such as:

Clarifai.GENERAL_MODEL, Clarifai.NSFW_MODEL, Clarifai.WEDDING_MODEL, Clarifai.TRAVEL_MODEL, Clarifai.FOOD_MODEL, Clarifai.COLOR_MODEL, Clarifai.APPAREL_MODEL, Clarifai.FOCUS_MODEL, etc.


v1 API

// Initialization with CLIENT_ID and CLIENT_SECRET
Clarifai.initialize({
  'clientId': '{clientId}',
  'clientSecret': '{clientSecret}'
});

// Get tags for an image or video via url
Clarifai.getTagsByUrl('https://samples.clarifai.com/metro-north.jpg', {model: 'general-v1.3'}).then(
  handleResponse,
  handleError
);

// Get tags for an image or video via image bytes
Clarifai.getTagsByImageBytes('G7p3m95uAl...', {model: 'general-v1.3'}).then(
  handleResponse,
  handleError
);

v2 API

// Instantiate a new Clarifai app passing in your shiny new API Key
var app = new Clarifai.App({
  apiKey: 'YOUR_API_KEY'
});

//
// IMAGES
//

// Predict the contents of an image via URL
app.models.predict(Clarifai.GENERAL_MODEL, 'https://samples.clarifai.com/metro-north.jpg').then(
  handleResponse,
  handleError
);

// Predict the contents of an image via image bytes
app.models.predict(Clarifai.GENERAL_MODEL, {base64: 'G7p3m95uAl...'}).then(
  handleResponse,
  handleError
);

//
// VIDEOS
//

// Predict the contents of a video via URL
app.models.predict(Clarifai.GENERAL_MODEL, 'https://samples.clarifai.com/beer.mp4', {video: true}).then(
  handleResponse,
  handleError
);

// Predict the contents of a video via a local file
app.models.predict(Clarifai.GENERAL_MODEL, {base64: 'AAAAIGZ...'}, {video: true}).then(
  handleResponse,
  handleError
);


Python

For the Python client, v2 installation is exactly the same as v1, but you will need these extra imports:

from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage
from clarifai.rest import Video as ClVideo

Client installation:

$ pip install clarifai


The code!

v1 API

# Import the client
from clarifai.client import ClarifaiApi

# Instantiation
api = ClarifaiApi('CLIENT_ID', 'CLIENT_SECRET')

# Get tags for an image via url
results = api.tag_urls('https://samples.clarifai.com/metro-north.jpg', model='general-v1.3')

# Get tags for an image via image bytes
results = api.tag_images(open('/home/user/image.jpeg', 'rb'), model='general-v1.3')

v2 API

# Import the clients
from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage
from clarifai.rest import Video as ClVideo

# Instantiation
app = ClarifaiApp(api_key='YOUR_API_KEY')

#
# IMAGES
#

# Predict the contents of an image via url
results = app.models.get('general-v1.3').predict_by_url('https://samples.clarifai.com/metro-north.jpg')

# Predict the contents of an image via a local file
results = app.models.get('general-v1.3').predict_by_filename('/home/user/image.jpeg')

#
# VIDEOS
#

# Predict the contents of a video via URL
model = app.models.get('general-v1.3')        
video = ClVideo(url='https://samples.clarifai.com/beer.mp4')      
model.predict([video])

# Predict the contents of a video via a local file
model = app.models.get('general-v1.3')        
video = ClVideo(filename='/home/user/video.mp4')      
model.predict([video])


Java

Onto good ole Java! For all of you Android folks out there, you can call executeAsync() at the end of your predict request for an easy way to handle the asynchronous work instead of writing AsyncTask boilerplate.

Also note that v2 uses a totally different method for videos than for images:

v1 API

// Installation

// Maven
<dependency>
  <groupId>com.clarifai</groupId>
  <artifactId>clarifai-api-java</artifactId>
  <version>1.2.0</version>
</dependency>

// Gradle
compile "com.clarifai:clarifai-api-java:1.2.0"

// Instantiation
ClarifaiClient clarifai = new ClarifaiClient("{clientId}", "{clientSecret}");

// Get tags for an image or video via url
List<RecognitionResult> results = clarifai.recognize(new RecognitionRequest("https://samples.clarifai.com/metro-north.jpg")
        .setModel("general-v1.3"));

// Get tags for an image or video via image bytes
List<RecognitionResult> results = clarifai.recognize(new RecognitionRequest(new File("/home/user/image.jpeg"))
        .setModel("general-v1.3"));

v2 API

// Installation

// Maven
<dependency>
  <groupId>com.clarifai</groupId>
  <artifactId>clarifai-api-java</artifactId>
  <version>2.3</version>
</dependency>

// Or Gradle

// Add the client to your dependencies
dependencies {
  compile "com.clarifai.clarifai-api2:core:2.3"
}

// Make sure you have the Maven Central Repository in your Gradle File
repositories {
  mavenCentral()
}

// Instantiation
final ClarifaiClient client = new ClarifaiBuilder("API-KEY").buildSync();

// ---Usage---
//
// IMAGES
//
// Predict the contents of an image via url

Model<Concept> generalModel = client.getDefaultModels().generalModel();

PredictRequest<Concept> request = generalModel.predict().withInputs(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")));

List<ClarifaiOutput<Concept>> results1 = request.executeSync().get();

// Predict the contents of an image via image bytes
request = generalModel.predict().withInputs(
ClarifaiInput.forImage(new File("/home/user/image.jpeg")));

List<ClarifaiOutput<Concept>> results2 = request.executeSync().get();

//
// VIDEOS
//
// Predict the contents of a video via url

Model<Frame> generalVideoModel = client.getDefaultModels().generalVideoModel();

PredictRequest<Frame> videoRequest = generalVideoModel.predict().withInputs(
ClarifaiInput.forVideo("https://samples.clarifai.com/beer.mp4")
);

List<ClarifaiOutput<Frame>> videoResults = videoRequest.executeSync().get();

// Predict the contents of a video via local file

videoRequest = generalVideoModel.predict().withInputs(
ClarifaiInput.forVideo(new File("/home/user/video.mp4"))
);

List<ClarifaiOutput<Frame>> videoResults2 = videoRequest.executeSync().get();


Objective-C

Note that videos aren't available in the Objective-C client quite yet, but they will be shortly!

For installation in Objective-C, add:

pod 'Clarifai'

to your Podfile first. Then install the dependencies and generate the workspace with:

pod install

Now it’s time to open YOUR_PROJECT_NAME.xcworkspace to start coding!

v1 API

// Import the client
#import "ClarifaiClient.h"

// Instantiation
ClarifaiClient *client = [[ClarifaiClient alloc] initWithAppID:@"{clientId}"
                                                     appSecret:@"{clientSecret}"];

// Get tags for an image via url
NSString *imageAsUrl = @"https://samples.clarifai.com/metro-north.jpg";
[client recognizeURLs:@[imageAsUrl] completion:^(NSArray *results, NSError *error) {
  NSLog(@"results: %@", results);
}];

// Get tags for an image via local file
UIImage *image = [UIImage imageNamed:@"dress.jpg"];
NSData *imageAsJpeg = UIImageJPEGRepresentation(image, 0.9);
[client recognizeJpegs:@[imageAsJpeg] completion:^(NSArray *results, NSError *error) {
  NSLog(@"results: %@", results);
}];

v2 API

// Import the client
#import "ClarifaiApp.h"

// Instantiation
ClarifaiApp *app = [[ClarifaiApp alloc] initWithApiKey:@"YOUR_API_KEY"];

//
// IMAGES
//

// Predict the contents of an image via url
ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  [model predictOnImages:@[image] completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
    NSLog(@"outputs: %@", outputs);
  }];
}];

// Predict the contents of an image via image bytes
UIImage *image = [UIImage imageNamed:@"dress.jpg"];
ClarifaiImage *clarifaiImage = [[ClarifaiImage alloc] initWithImage:image];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  [model predictOnImages:@[clarifaiImage] completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
    NSLog(@"outputs: %@", outputs);
  }];
}];

//
// VIDEOS
//

Coming soon!!!


v1 vs. v2 Response

You must be wondering, "hmm... if the request formats are different, will the responses be different too?" The short answer is: yep! Make sure that you're accessing the outputs correctly by comparing the JSON formats in this section.

v1 API

{
  "status_code": "OK",
  "status_msg": "All images in request have completed successfully. ",
  "meta": {
    "tag": {
      "timestamp": 1451945197.398036,
      "model": "general-v1.3",
      "config": "34fb1111b4d5f67cf1b8665ebc603704"
    }
  },
  "results": [
    {
      "docid": 15512461224882631443,
      "url": "https://samples.clarifai.com/metro-north.jpg",
      "status_code": "OK",
      "status_msg": "OK",
      "local_id": "",
      "result": {
        "tag": {
          "concept_ids": [
            "ai_HLmqFqBf",
            ...
          ],
          "classes": [
            "train",
            ...
          ],
          "probs": [
            0.9989112019538879,
            ...
          ]
        }
      },
      "docid_str": "31fdb2316ff87fb5d747554ba5267313"
    }
  ]
}

v2 API

The following format applies to the General, Apparel, Food, NSFW, Travel and Wedding models (a.k.a. the "Concept" models):

{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "ea68cac87c304b28a8046557062f34a0",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2016-11-22T16:50:25Z",
      "model": {
        "name": "general-v1.3",
        "id": "aaa03c23b3724a16a56b629203edc62c",
        "created_at": "2016-03-09T17:11:39Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept"
        },
        "model_version": {
          "id": "aa9ca48295b37401f8af92ad1af0d91d",
          "created_at": "2016-07-13T01:19:12Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "ea68cac87c304b28a8046557062f34a0",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "ai_HLmqFqBf",
            "name": "train",
            "app_id": null,
            "value": 0.9989112
          },
          ...        
          {
            "id": "ai_VSVscs9k",
            "name": "terminal",
            "app_id": null,
            "value": 0.9230834
          }
        ]
      }
    }
  ]
}
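To pull the tags out of that, you just walk outputs, then data, then concepts. Here's a minimal Python sketch, assuming response is the JSON above already parsed into a dict (e.g. via response.json() with the requests library):

def print_concepts(response):
    """Print name/confidence pairs from a v2 concept-model response dict."""
    for output in response["outputs"]:
        for concept in output["data"]["concepts"]:
            print("%s: %.4f" % (concept["name"], concept["value"]))

# With the response above, this prints lines like:
# train: 0.9989
# ...
# terminal: 0.9231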

We also have models that return bounding box information, and these use a different response format. They include Celebrity, Demographics, Faces, Focus and Logos (a.k.a. our "Detection" models).

Here's their response format (logo in this example):

{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "bacb17a24bcc4541891e427acefaf8d1",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2017-04-25T15:33:08.306147Z",
      "model": {
        "name": "logo",
        "id": "c443119bf2ed4da98487520d01a0b1e3",
        "created_at": "2017-03-06T22:57:00.707216Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept",
          "type_ext": "detection"
        },
        "model_version": {
          "id": "ef1b7237d28b415f910ca343a9145e99",
          "created_at": "2017-03-06T22:57:05.625525Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "acb7e28f61d44e1a82600d4e22da30ac",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/logo.jpg"
          }
        }
      },
      "data": {
        "regions": [
          {
            "region_info": {
              "bounding_box": {
                "top_row": 0.48373735,
                "left_col": 0.5612386,
                "bottom_row": 0.61668295,
                "right_col": 0.63389087
              }
            },
            "data": {
              "concepts": [
                {
                  "id": "ai_TmWdpWdB",
                  "name": "Apple Inc",
                  "app_id": null,
                  "value": 0.7499092
                }
              ]
            }
          },
          {
            "region_info": {
              "bounding_box": {
                "top_row": 0.69582206,
                "left_col": 0.05707028,
                "bottom_row": 0.86856836,
                "right_col": 0.36279052
              }
            },
            "data": {
              "concepts": [
                {
                  "id": "ai_gtSs6kTH",
                  "name": "Chevrolet",
                  "app_id": null,
                  "value": 0.15019512
                }
              ]
            }
          },
          {
            "region_info": {
              "bounding_box": {
                "top_row": 0.6164421,
                "left_col": 0.8204964,
                "bottom_row": 0.7418648,
                "right_col": 0.8999992
              }
            },
            "data": {
              "concepts": [
                {
                  "id": "ai_JnNxrhnm",
                  "name": "CBS",
                  "app_id": null,
                  "value": 0.07974561
                }
              ]
            }
          }
        ]
      }
    }
  ]
}
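Parsing these is almost the same, except each output holds a list of regions, and the bounding box coordinates are fractions of the image's height and width (0.0 to 1.0). Here's a minimal Python sketch, again assuming the response has already been parsed into a dict:

def print_regions(response):
    """Print each region's top concept and its bounding box from a v2
    detection-model response dict."""
    for output in response["outputs"]:
        for region in output["data"]["regions"]:
            box = region["region_info"]["bounding_box"]
            top = region["data"]["concepts"][0]
            print("%s (%.2f): rows %.2f-%.2f, cols %.2f-%.2f" % (
                top["name"], top["value"],
                box["top_row"], box["bottom_row"],
                box["left_col"], box["right_col"]))

# With the response above, the first region prints as:
# Apple Inc (0.75): rows 0.48-0.62, cols 0.56-0.63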

That's it! Hopefully this guide helped you transition your application to the Clarifai v2 API successfully. Please let us know if you run into any trouble by shooting us an email at support@clarifai.com.
