imgix: A New Step Forward

Nowadays, you can take out your phone, open a user-friendly app, capture a shockingly high-quality picture, make any corrections you need, and then send that picture to your grandma, who can see it seconds later. Consumer photography is a largely solved problem. However, the same innovations that have benefited consumers (access to high-quality cameras, ease of use, rapid distribution of photos) have created a multitude of complexities for businesses, which now have to deal with both more discerning users and a more complicated visual landscape. Delivering the highest-quality, lowest-latency image to your users is getting progressively harder as more and more factors play a role in what the “perfect” image means. Nobody should have to become an imaging scientist to get the most out of their images. We built imgix to solve this.

By providing a simple URL API and tools, we have empowered developers and businesses to serve their existing static images dynamically, in response to different device types, network conditions, art direction requirements, and browser capabilities. To date, we have transformed and delivered well over 1 trillion images for our customers, and we see tens to hundreds of thousands of image requests per second, every second of every day. As a team, we are incredibly proud of this accomplishment. And yet, we recognize that there is a lot of work still to do before we can confidently claim to be serving all of the world’s images. With that goal in front of us, we realized that we need to begin expanding how imgix fundamentally works to meet the evolving needs of our customers and to bring forward the innovative features we have been imagining internally for a long time. That is why today we are introducing a new tool that marks the starting point for a much larger reconception of what imgix is and how imgix works. I want to take a moment to explain what we are doing from a developer’s perspective so that you know what to expect from us.
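To make the URL API concrete, here is a minimal sketch of how a page might build device-appropriate renditions of a single master image. The domain and image path below are placeholders, while the query parameters (w, h, dpr, fit, auto) are standard imgix rendering parameters.

```typescript
// Minimal sketch: the domain and image path are placeholders; the query
// parameters (w, h, dpr, fit, auto) are standard imgix rendering parameters.
function buildImgixUrl(path: string, params: Record<string, string | number>): string {
  const query = new URLSearchParams(
    Object.entries(params).map(([key, value]) => [key, String(value)])
  ).toString();
  return `https://example.imgix.net${path}?${query}`;
}

// One master image, three renditions tailored to different contexts.
const phone   = buildImgixUrl("/products/chair.jpg", { w: 400, dpr: 2, auto: "format,compress" });
const desktop = buildImgixUrl("/products/chair.jpg", { w: 1200, auto: "format,compress" });
const thumb   = buildImgixUrl("/products/chair.jpg", { w: 160, h: 160, fit: "crop" });
```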

Since the beginning of imgix, we have statelessly processed images as they are requested. You connect your existing image buckets to us and we provide an API for accessing those images, enabling us to transform them on demand and deliver them over a CDN. All of this processing happens in real time and relies heavily on intelligent caching at multiple layers of our infrastructure. While this approach has been foundational to our success, it does create a number of limitations. First, every decision we make about an individual image has to be measured and made within the <100ms we have to process that image. Without state, the output of every analysis we perform has to fit within this window, preventing us from offering many advanced operations. Next, we analyze and learn from the image requests that flow through our stack and use those findings to hone our algorithms, but only at a global level. Without state, we cannot easily optimize our algorithms at the per-image level. This means that we cannot separately tailor our rendering output for images used on a dating site versus an e-commerce site, for instance. Finally, we want to enable you, the content owner, to provide more context about your images so that we can deliver them even better for you. You know your images better than we ever will, and involving you in the imaging process will only help you achieve more with our service. Without state, though, there would be nowhere to capture the insights you might want to provide.
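As a rough illustration of this on-demand model (again using a placeholder domain and path), a single master image can back an entire responsive srcset; each distinct URL is rendered the first time it is requested and then served from cache on subsequent hits.

```typescript
// Sketch of the "one master, many on-demand renders" model described above.
// Each width produces a distinct URL, and each distinct URL is a separately
// rendered and cached derivative. Domain and path are placeholders.
const widths = [320, 640, 960, 1280, 1920];

const srcset = widths
  .map((w) => `https://example.imgix.net/hero.jpg?w=${w}&auto=format,compress ${w}w`)
  .join(", ");

// The resulting string can be dropped into an <img srcset="..."> attribute so
// the browser picks the smallest sufficient variant for its viewport and DPR.
```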

So the answer is: we are adding state to imgix, which will enable an entirely new class of features, functionality, and tools. Going forward, each source will have a database of image data associated with it. It will store all of the data about your existing images and give you the ability to add, edit, or delete that data to your specification. Each image will have its own record in the database, and that record will stay in sync with the image object in your bucket. You will get to choose how your images are added to this database, whether by having us spider your bucket in whole or in part, by having us add records based on which images are accessed through our service, or manually via API (coming soon). By default, images added to the database are automatically pushed through content detection (is this actually a JPEG or GIF?), image analysis (what colorspace and bit depth is this image?), content warning analysis (is this image acceptable for my site?), and machine-learned tagging (what is actually in this image?). Soon, you will also have the ability to configure how much analysis you want.

As a user of your image database, you will be able to edit the image records to your liking. You can add categories, create and store custom fields, and set many other properties we have yet to announce. You can also edit or add to various analyzed properties in a non-destructive way. For instance, if you notice that a tag added by our machine learning is not quite right, you can delete the tag and it will not come back on future refreshes by the machine-learning analyzer. Similarly, if you add a tag, it will never be overwritten by the analyzer. In fact, your edits to the tags will help us improve the machine learning over time.
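To give a rough sense of what this could look like, here is a purely illustrative sketch of an image record. The field names are hypothetical, since the API has not been released yet; they simply mirror the kinds of data described above.

```typescript
// Hypothetical shape of an image record; the real schema and API have not
// been published. Fields mirror the analyses and edits described above.
interface ImageRecord {
  path: string;                          // key of the object in your S3/GCS bucket
  detectedFormat: string;                // content detection: is it really a JPEG, a GIF, ...?
  colorspace?: string;                   // image analysis, e.g. "sRGB"
  bitDepth?: number;                     // image analysis
  contentWarnings?: string[];            // content warning analysis
  machineTags: string[];                 // machine-learned tags; your deletions persist across refreshes
  userTags: string[];                    // tags you add; never overwritten by the analyzer
  categories: string[];                  // your own grouping
  customFields: Record<string, string>;  // arbitrary fields you define
}

const example: ImageRecord = {
  path: "/products/chair.jpg",
  detectedFormat: "jpeg",
  colorspace: "sRGB",
  bitDepth: 8,
  machineTags: ["furniture", "chair", "wood"],
  userTags: ["spring-catalog"],
  categories: ["products"],
  customFields: { sku: "CH-1042" },
};
```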

Today, we are taking the first step by announcing the invite-only beta of our first tool for managing the data associated with your images: Image Manager. It is a sophisticated UI we created to help you view, search, and edit your image data. We have been using it internally for a few months now, and it has become an invaluable part of our daily workflow. Beyond any of our future plans for it, straight out of the box it is the single best UI any of us have ever used for navigating an existing S3 or GCS bucket. Over the next few months, we will expand programmatic access to the data associated with your images by building a dependable and robust API, and from there we will build out the integrations and SDKs that make developers’ lives even easier. We will share further roadmap details and pricing updates as we move forward.

Right now we are focused on getting this first step right for our customers, and we need your help to do that. We are looking to work with early adopters who can help us improve the technology and provide feedback about how they are using it. If that sounds like you, please reach out to sign up for the beta.
