What programming languages do you officially support?
Visit our API overview to learn more about our supported languages.
Can I return results from two or more models at once via the API?
Yep! With our Workflows functionality you can call up to 5 models at once (public, custom or both). Check it out!
Do you accept base64 encoded data?
We do! Many of our v2 clients accept base64 encoded data for local files, though it isn’t always needed.
For example, JavaScript requires it for local files:
app.models.predict(Clarifai.GENERAL_MODEL, {base64: "G7p3m95uAl..."}).then(
  function(response) {
    // handle the prediction response here
  },
  function(err) {
    // handle any error here
  }
);
Whereas Python gives you three options:
predict_by_base64(base64_bytes, lang=None)
predict_by_bytes(raw_bytes, lang=None)
predict_by_filename(filename, lang=None)
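If you need to produce the base64 string yourself, Python's standard library handles it. A minimal sketch (the byte string below is a stand-in for real image data read from a local file):

```python
import base64

# Stand-in for real image bytes read from a local file.
raw = b"\x89PNG\r\n\x1a\n"

# Encode to a base64 ASCII string, as clients that expect base64 require.
encoded = base64.b64encode(raw).decode("ascii")

# Decoding round-trips back to the original bytes.
assert base64.b64decode(encoded) == raw
```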
How do you return results back?
This depends on which client you are using. Our Python and JavaScript clients always return JSON, while our Java, C#, and PHP clients return objects directly for easier use. If you're accessing our API via cURL, you'll receive JSON as well, just like the former two cases.
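To give a feel for the JSON case, here is a sketch of reading concept names out of a prediction response. The response shape below is hypothetical and abbreviated; the exact fields your client returns may differ:

```python
# Hypothetical, abbreviated JSON prediction response.
response = {
    "outputs": [
        {"data": {"concepts": [
            {"name": "dog", "value": 0.98},
            {"name": "canine", "value": 0.95},
        ]}}
    ]
}

# Pull out the predicted concept names from the first output.
names = [c["name"] for c in response["outputs"][0]["data"]["concepts"]]
```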
Is there a way to set my token so that it doesn't expire?
Good news! Our API Keys never expire so you don't need to worry about that! Woohoo!
If you're still using our older Token functionality for whatever reason, you'll want to switch to Keys as soon as possible. We're keeping Tokens around for now, but they will be deprecated at some point.
Do you have a Status Dashboard that I can check to see if everything is running smooth as butter?
We sure do! You can find it at http://status.clarifai.com, and we'll try to keep it updated as much as possible.
I am getting throttled by the API. What's up with that?
Bummer. There are several things that can cause this:
- You hit a Community Tier limit (5,000 operations, 10,000 inputs, 10 concepts)
- You hit an Essential Tier limit (100,000 operations, 100,000 custom predictions, 100,000 inputs)
- You're a Business/Enterprise Tier User and you hit a pre-defined limit
- You're making more than 10 calls per second
- You never verified your email address with us and are hitting the 100 operation "Unverified Email Plan" limit
The first two scenarios are the common ones, and the fourth means you're simply making too many calls at once. In that case, batching your requests in chunks of 32 would likely solve the issue, and it would also put less load on our servers :-)
If a throttle ever seems weird or incorrect feel free to reach out to us.
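The batching idea above can be sketched in a few lines. This is plain chunking logic, not a client call; the URLs are placeholders:

```python
def chunks(items, size=32):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 100 inputs become 4 batches: 32 + 32 + 32 + 4 — four API calls
# instead of one hundred.
urls = ["https://example.com/img%d.jpg" % n for n in range(100)]
batches = list(chunks(urls))
```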
Can I send requests from multiple threads?
Our API has been designed to handle highly concurrent requests so you can totally do this - to an extent. Just note that batching is much faster if you're able to do so.
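If you do go the multi-threaded route, a thread pool keeps the concurrency bounded. A minimal sketch — `predict` here is a hypothetical stand-in for your client's actual predict call:

```python
from concurrent.futures import ThreadPoolExecutor

def predict(url):
    """Stand-in for a real client call; swap in your client's predict."""
    return {"url": url, "status": "ok"}

urls = ["https://example.com/img%d.jpg" % n for n in range(8)]

# A modest pool size bounds concurrency and helps you stay under
# the 10-calls-per-second limit mentioned above.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(predict, urls))
```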
It's taking a really long time to get a response from the API. Why is that?
Well, that's not good! Typically we take only 300-500 milliseconds to return an image response, so if you're seeing really long response times then something is up.
Possible causes of long response times include:
- Really large image files in the 4+ MB range
- Really large video files in the 80-90 MB range
- Files that are hosted on a server in Asia
- Inefficient code. Batch processing could likely help in this scenario
- Some server issues on our end (very infrequent)
- A slow internet connection on your end
Keep in mind that we need to download files in order to analyze them, so the bigger they are, the bigger the overall latency is going to be. And sometimes files hosted on far-away servers are just too tough for us to grab. :-/
We also down-scale all photos to 512 pixels in length so if you could send them in that ballpark it'd also help with latency.
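If you want to pre-scale images yourself, the arithmetic is simple. A sketch that computes target dimensions, assuming the 512-pixel limit applies to the longest side (an assumption on our part):

```python
def downscale_dims(w, h, target=512):
    """Return (w, h) scaled so the longer side is at most `target`,
    preserving aspect ratio."""
    longest = max(w, h)
    if longest <= target:
        return (w, h)  # already small enough, leave it alone
    scale = target / longest
    return (round(w * scale), round(h * scale))
```

For example, `downscale_dims(2048, 1536)` gives `(512, 384)`.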
How does a video response differ from an Image response?
The values returned for videos are similar to those for images, except that by default you get analysis for each second of video. Each second's results are stored in brackets [ ], and you can check them against a timestamp array.
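A sketch of working with per-second results and the parallel timestamp array. The field names below are hypothetical and abbreviated; check your client's actual response for the exact shape:

```python
# Hypothetical per-second video results; timestamps in milliseconds.
frames = [
    {"time": 0,    "concepts": [{"name": "dog", "value": 0.97}]},
    {"time": 1000, "concepts": [{"name": "dog", "value": 0.95}]},
]

# Build the timestamp array and the top concept score per second.
timestamps = [f["time"] for f in frames]
top_scores = [f["concepts"][0]["value"] for f in frames]
```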