How does Video analysis differ from Image analysis?
Great question! By default, videos are analyzed at a rate of one frame per second of footage, essentially returning a group of images to you in JSON format. You'll receive an array of timestamps, each corresponding to a set of tags and probabilities.
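To make that concrete, here's a minimal sketch of working with a per-second result like the one described above. The field names (`time_ms`, `tags`, `probs`) and the helper function are illustrative assumptions, not the API's actual response schema — check the API reference for the real field names.

```python
# Illustrative per-second video result, as described above: one entry per
# second of footage, each with tags and their probabilities.
# NOTE: field names here are assumptions for illustration only.
video_result = {
    "frames": [
        {"time_ms": 0,    "tags": ["dog", "grass"],  "probs": [0.98, 0.91]},
        {"time_ms": 1000, "tags": ["dog", "ball"],   "probs": [0.97, 0.88]},
        {"time_ms": 2000, "tags": ["dog", "person"], "probs": [0.96, 0.93]},
    ]
}

def frames_to_timeline(result):
    """Map each second of footage to its (tag, probability) pairs."""
    return {
        frame["time_ms"] // 1000: list(zip(frame["tags"], frame["probs"]))
        for frame in result["frames"]
    }

timeline = frames_to_timeline(video_result)
# timeline[0] -> [("dog", 0.98), ("grass", 0.91)]
```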
What video formats do you support?
Currently we support .AVI, .MP4, .WMV, .MOV, .GIF, and .3GPP files. Note that GIFs are treated as videos even if they don't have any animation.
If you upload a format that isn't listed here, you'll receive this error:
31101 - Input video format unsupported
Can I change that frame-rate so it only gives me tags once every X seconds?
Not quite yet, but we'll be introducing this very soon! Stay tuned.
Is there a limit on video file size?
Unfortunately, yes. A video uploaded via URL is limited to 80MB or 10 minutes in length, while a local file is limited to 10MB.
If your video exceeds these limits, please follow our tutorial on how to break that sucker up into smaller components and then send those into the Video API. Otherwise, our server will get cranky and return an error — and he does not like being cranky.
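Before uploading, you can do a quick pre-flight check against the limits stated above (80MB or 10 minutes for URL uploads, 10MB for local files). This helper is my own sketch — the function name and the 1 MB = 1,000,000 bytes convention are assumptions, not part of any SDK.

```python
# Upload limits described above (assuming 1 MB = 1,000,000 bytes).
URL_MAX_BYTES = 80 * 1_000_000
URL_MAX_SECONDS = 10 * 60
LOCAL_MAX_BYTES = 10 * 1_000_000

def within_limits(size_bytes, duration_seconds, via_url):
    """Return True if a video fits the Video API's upload limits."""
    if via_url:
        return (size_bytes <= URL_MAX_BYTES
                and duration_seconds <= URL_MAX_SECONDS)
    # Local files: only a size limit is documented.
    return size_bytes <= LOCAL_MAX_BYTES
```

If the check fails, split the video into smaller chunks (e.g. with a tool like ffmpeg) and submit each chunk separately.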
Can videos be analyzed with all the public models on the Clarifai platform?
Right now our video API can utilize the following models:
- General (11,000 broad concepts)
- NSFW (content moderation)
- Food (self-explanatory)
- Travel (travel and hospitality-related concepts)
- Weddings (also self-explanatory)
- Apparel (a few hundred clothing items)
We're hoping to have the other ones on there in the very near future though!
And what about Custom Training and Visual Search? Can videos be used for that?
Unfortunately, this isn't quite available yet either. If you want to use Custom Training or Visual Search with videos, the best workaround for now is to split the clips into frames and upload them as images.
Is this pricing the same as image analysis?
Yep! The only difference is that a video will consume operations on a per-second basis.
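As a back-of-the-envelope sketch of that per-second billing: a 60-second video consumes roughly 60 operations, the same unit as 60 image predictions. The per-model multiplier and the rounding-up of partial seconds below are my assumptions, not official pricing rules.

```python
import math

def operations_consumed(duration_seconds, num_models=1):
    """Estimate operations billed for a video of the given length.

    Assumes one operation per second of footage per model, with
    partial seconds rounded up — an illustrative estimate only.
    """
    return math.ceil(duration_seconds) * num_models

# e.g. operations_consumed(60) -> 60
```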
Can all of your API clients analyze videos?