Hello there! Our goal for this article is to use our Portal UI to build and train a custom model that can accurately categorize common aviation imagery, so let's get to it:

Custom concepts to be trained:

jet engine, window seat, cockpit, wing, and cityscape

Step 1: Create an account at

You'll need this to get access to our training platform, and when you verify your email address you'll get to create up to 10 custom concepts and store up to 10,000 training images! Definitely enough for what we're trying to do here.

Step 2: Edit your Application

API calls (operations) are tied to an account and application, and any model that you create and add images to will be contained within a specific application.

By default you'll have an application in your account already so let's change that one's name to whatever you'd like. I'm going to name mine "Aviation Stuff" for practical reasons.

Step 3: Explore the UI and add images to your application

Custom models are built by training on your own data, and they will be able to make predictions specific to your own unique content and context. To upload images you can either:

  1. Drag + drop image files from your computer, OR
  2. Paste in a list of URLs

Here's an example of both:
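If you'd rather script the upload than use the Portal, the same operation goes through Clarifai's REST API as a `POST /v2/inputs` call. Here's a minimal sketch that just builds the request body (the image URLs are placeholders, and the body shape reflects the v2 API as I understand it):

```python
import json

API_BASE = "https://api.clarifai.com/v2"  # Clarifai's v2 REST API root

def build_add_inputs_payload(image_urls):
    """Build the JSON body for POST /v2/inputs: one input per image URL."""
    return {"inputs": [{"data": {"image": {"url": url}}} for url in image_urls]}

# Placeholder URLs for illustration:
payload = build_add_inputs_payload([
    "https://example.com/cockpit-1.jpg",
    "https://example.com/window-seat-1.jpg",
])
print(json.dumps(payload, indent=2))
```

You would send this with your `Authorization: Key YOUR_API_KEY` header to `{API_BASE}/inputs`; the Portal does the same thing behind the scenes.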

Step 4: Add some concepts and create a model

A model is created as soon as you create your first concept. The model name inherits the name of your Application (though you can always change that later).

Note that in this particular example you'll see the newly created model on the left with 5 concepts under it.
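For reference, creating the same model programmatically is a single `POST /v2/models` call. A sketch of the request body, where the model and concept IDs are placeholders of my choosing and the nesting follows the v2 API shape:

```python
def build_create_model_payload(model_id, concept_ids):
    """Build the JSON body for POST /v2/models: a custom model that starts
    out knowing the given concept IDs."""
    return {
        "model": {
            "id": model_id,
            "output_info": {
                "data": {"concepts": [{"id": cid} for cid in concept_ids]},
            },
        }
    }

# Concept IDs below are illustrative placeholders:
payload = build_create_model_payload(
    "aviation-stuff",
    ["jet-engine", "window-seat", "cockpit", "wing", "cityscape"],
)
```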

Step 5: Label your images with those concepts

Now that you have some concepts, let's add them to your images! All images in your application are referred to as inputs, so if you ever see that lingo, the two names are essentially interchangeable. As you add concepts to images, you'll notice the counts in the Concept Panel on the left update to show how many images are labeled with each concept.

Note that you can also highlight a bunch of images at once by holding down the SHIFT key and then clicking on a range of them.
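Labeling also maps to one API call: a `PATCH /v2/inputs` that attaches a concept to a batch of inputs. A sketch of that body, assuming the v2 shape (input IDs here are placeholders; real ones come back when you add the inputs):

```python
def build_label_payload(input_ids, concept_id, positive=True):
    """Build the JSON body for PATCH /v2/inputs: attach a concept to a batch
    of inputs. value 1 marks a positive example; 0 would mark a negative."""
    return {
        "action": "merge",  # merge keeps any labels the inputs already have
        "inputs": [
            {
                "id": input_id,
                "data": {
                    "concepts": [
                        {"id": concept_id, "value": 1 if positive else 0}
                    ]
                },
            }
            for input_id in input_ids
        ],
    }

payload = build_label_payload(["input-123", "input-456"], "cockpit")
```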

Step 6: Train the model!

Once all of your images are labeled, let's go ahead and train your first model! You can either do this via the little 3-dot menu next to the model name or you can click on the model name and then click the Train Model button on the ensuing page.
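Under the hood, training just creates a new model version. If you're scripting it, my understanding of the v2 API is that you POST an empty body to the model's versions endpoint:

```python
API_BASE = "https://api.clarifai.com/v2"

def train_url(model_id):
    """Training creates a new model version: POST an empty body to this URL
    with your `Authorization: Key YOUR_API_KEY` header."""
    return f"{API_BASE}/models/{model_id}/versions"

print(train_url("aviation-stuff"))
# → https://api.clarifai.com/v2/models/aviation-stuff/versions
```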

Step 7: Upload some external images and see how the model is performing

To test out our model we can simply upload an image that does not already exist in it, and then click on it to get predictions.

You can see from the example above that it is performing pretty well on the "cockpit" concept here.
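The same test can be run over the API with `POST /v2/models/{model_id}/outputs`. A sketch of the request body plus a tiny helper for reading the response, using the nesting I'd expect from the v2 API (the response fragment below is made up for illustration):

```python
def build_predict_payload(image_url):
    """Body for POST /v2/models/{model_id}/outputs: run the trained model
    on an image that isn't in your training set."""
    return {"inputs": [{"data": {"image": {"url": image_url}}}]}

def top_concepts(response, n=3):
    """Extract (name, confidence) pairs from a predict response, which
    nests them under outputs -> data -> concepts, sorted by confidence."""
    concepts = response["outputs"][0]["data"]["concepts"]
    return [(c["name"], c["value"]) for c in concepts[:n]]

# Illustrative response fragment in that shape (values invented):
fake_response = {"outputs": [{"data": {"concepts": [
    {"name": "cockpit", "value": 0.98},
    {"name": "window seat", "value": 0.41},
]}}]}
print(top_concepts(fake_response, n=1))
# → [('cockpit', 0.98)]
```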

Step 8: Adding negative examples when the model is getting confused

Up until this point we have only added positive examples to the model (i.e. saying "this is definitely concept X"), but we haven't added any negatives that say the opposite ("this is not concept X"). A robust, well-performing concept is typically made up of both positive and negative examples, at roughly a 3:1 or 4:1 ratio of positives to negatives, but adding too many negatives will be counterproductive, so be careful.

If you see in one instance that you're getting a false positive for "jet engine", for example, you may want to teach the system that it's wrong. In the example below a sink drain is getting labeled as a jet engine with 95% confidence, so we assign that image as a Negative.

All you have to do is re-train the model after that (also shown in the video), and it will take that feedback into account going forward.

Note that this is just an example to show confusion, and if there is little to no chance of a sink drain ever being predicted on your aviation model, you probably don't need to worry about them. It will often make more sense to add Negatives to existing images in your training set that are not predicting well.
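In API terms, a negative label is the same `PATCH /v2/inputs` call as a positive one, with the concept's value set to 0 instead of 1. A sketch, with a placeholder input ID standing in for the mislabeled sink-drain photo:

```python
def build_negative_label_payload(input_id, concept_id):
    """Build the JSON body for PATCH /v2/inputs that marks an input as a
    NEGATIVE example of a concept: value 0 instead of the usual 1."""
    return {
        "action": "merge",
        "inputs": [
            {
                "id": input_id,
                "data": {"concepts": [{"id": concept_id, "value": 0}]},
            }
        ],
    }

payload = build_negative_label_payload("sink-drain-photo", "jet-engine")
```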

Step 9: Adding additional concepts to your model (optional)

We've already seen how to add concepts individually on the left-hand panel of your application, but you can also add them by selecting a few inputs and then adding a new one on the fly. Here we add a new "front-facing engine" concept to our model and then do an immediate train to incorporate it into our predictions:

Additional Features that may be useful:

Adding positive and negative examples directly to a concept via its file menu:

A quick and easy way to add positive and negative examples for a particular concept is to click the menu button next to it and select "View Details".

...which leads to this screen:

Here in the concept detail view you'll be able to see:

  1. The creation date of your concept
  2. Samples of positive examples
  3. Samples of negative examples

Examples of both kinds can be added by clicking the "Add Positives" or "Add Negatives" links in the view above, which will let you paste a list of up to 32 URLs or upload local files.

Viewing all of a concept's positive or negative examples in the gallery view

If you go to a concept's menu and click on the "Filter Positives" or "Filter Negatives" options, the resulting gallery view will filter by those, respectively. You'll also notice that the concept that you ran this on will have a little green plus or minus next to it.

Positive "front-facing engine":

Negative "jet engine":

Searching by concept

To perform a visual search by a concept, simply click on the concept name in the left-hand panel. When searching by concept, results of that query are returned in order of visual similarity + your model's predictions on them.

Searching by concept is a powerful way to gauge how your concept prediction is performing. It will also surface other images in your application that may be useful as positive examples (like the cockpit image we tested earlier, which isn't labeled yet in the video above).

Searching by your custom concepts and general model concepts.

Our General Model consists of over 11,000 objects, scenes, and terms that Clarifai has built to provide widespread understanding of the world, and its predictions appear alongside each image, underneath your custom concept predictions. You can also perform a visual search that combines custom + general model concepts (like "water"), as seen in the video below:

Note that after "water" is applied it filters the images further by a combination of that and "cityscape".
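These concept searches map to `POST /v2/searches` in the API. A sketch of the request body, assuming the v2 shape where each entry under "ands" is a separate filter, so passing both names mirrors the combined "cityscape" + "water" filter above:

```python
def build_concept_search_payload(concept_names):
    """Build the JSON body for POST /v2/searches. Each entry under "ands"
    narrows the results, so two names finds inputs predicted as BOTH."""
    return {
        "query": {
            "ands": [
                {"output": {"data": {"concepts": [{"name": name}]}}}
                for name in concept_names
            ]
        }
    }

payload = build_concept_search_payload(["cityscape", "water"])
```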

If you have any other questions/comments/feedback on our Custom Training platform, please email us. We'd love to hear from you! 👍
