We're building toward a GPT-3 level moment in computer vision, and here's our V0.
It's called Carrot. Request access here:
We are starting with a Visual Question-Answering model, and plan to make it increasingly general purpose over time as we build in common CV features and scale up the parameter count.
It's a hybrid vision-language model that extracts semantic meaning from images and lets you query it in plain English. This V0 runs a 13B-parameter model, with 18B and 34B iterations in the pipeline.
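To give a feel for what querying an image in natural language might look like, here's a minimal sketch of a request payload. The endpoint, field names, and model identifier are all assumptions for illustration; the actual beta API may differ.

```python
import json

# Hypothetical Carrot VQA request (all names are assumptions,
# not the published API):
payload = {
    "image_url": "https://example.com/kitchen.jpg",   # image to analyze
    "question": "How many chairs are at the table?",  # plain-English query
    "model": "carrot-13b",                            # 18B/34B variants planned
}

# Serialize as the JSON body you'd POST to the API.
body = json.dumps(payload)
print(body)
```

In practice you'd send this body to the API with your beta access key and receive a natural-language answer back.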
The API is in beta, so join the waitlist linked above to get early access.