Role & duration
Research, information architecture, user flows, UI/UX design, interaction design, visual design, prototyping
Led the design process and collaborated with a product manager and developer
May 2023 - July 2023 (3 months)
Overview
Visual Search leverages AI to streamline product identification and reduce the need to train curators on a brand's product catalogue. The feature helped curators identify and tag products they had never seen before, leading to a 15% increase in their success rate.
How to create shoppable galleries
To create shoppable galleries tagged with products (as seen below), brands had to tag each product manually by entering a keyword such as a product name or number. This required users to have extensive knowledge of their brand's catalogue, and it was very time-consuming.
By integrating Scale AI into our product, we let users find products visually by simply selecting the product in the photo.
Technical constraints
The user must draw a bounding box around the product. The box is sent to the backend and compared against the product database.
Users have the option to enter a keyword.
We wanted this feature to work alongside the existing user flow.
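The constraints above boil down to one request shape: a required bounding box plus an optional keyword. A minimal sketch of building that payload, assuming a hypothetical `buildVisualSearchQuery` helper and normalized coordinates (the real Scale AI integration is not shown in this case study):

```typescript
// Hypothetical request shapes for the visual-search backend.
interface BoundingBox {
  x: number; // top-left corner, in pixels
  y: number;
  width: number;
  height: number;
}

interface VisualSearchQuery {
  box: { x: number; y: number; width: number; height: number }; // normalized 0-1
  keyword?: string; // optional keyword, per the second constraint
}

// Normalize the pixel-space box against the image size so the backend
// can compare it to the product database regardless of display scale.
function buildVisualSearchQuery(
  box: BoundingBox,
  imageWidth: number,
  imageHeight: number,
  keyword?: string
): VisualSearchQuery {
  const query: VisualSearchQuery = {
    box: {
      x: box.x / imageWidth,
      y: box.y / imageHeight,
      width: box.width / imageWidth,
      height: box.height / imageHeight,
    },
  };
  if (keyword) query.keyword = keyword;
  return query;
}
```

The keyword stays optional so the existing keyword flow and the new visual flow can share one request path.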
The current user flow and how AI will affect user inputs
While figuring out how to implement AI within an existing feature, I had to rework the existing user flow while reusing existing components. This confused users, who could not tell which flow they were in.
The image below shows how the bounding box is at the end of the existing flow, but with AI implemented, it would be at the very beginning. I addressed this "switch" in flow by adding a toggle that turns Visual Search on and off.
Changes to the UI to incorporate the AI
Below are a few additions to the UI to help guide the user through Visual Search with new AI functionalities.
Since the bounding box was the only required input from the user, drawing it became the first step of the user flow, and the search controls were attached to the bounding box itself for ease of access.
A disabled state was added: because a bounding box was required, we needed to notify users who had not yet drawn one. This also helped transition them away from their original flow.
A toggle was added to switch between the regular flow and the AI flow.
A submit button was required because we did not want the original debounce to overload the system with API calls. Users needed to confirm each search deliberately before it ran.
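The cost difference between debounced search-as-you-type and an explicit submit can be sketched with an injected manual clock (illustrative names, not the production code):

```typescript
// Sketch: a manual clock makes the API-call volume of each approach countable.
type Task = { runAt: number; fn: () => void };

class ManualClock {
  private now = 0;
  private tasks: Task[] = [];
  schedule(delay: number, fn: () => void): void {
    this.tasks.push({ runAt: this.now + delay, fn });
  }
  advance(ms: number): void {
    this.now += ms;
    const due = this.tasks.filter(t => t.runAt <= this.now);
    this.tasks = this.tasks.filter(t => t.runAt > this.now);
    due.forEach(t => t.fn());
  }
}

// Debounced search-as-you-type: every pause in typing fires a search,
// so one eventual query can still cost several heavy API calls.
function debounce(clock: ManualClock, delay: number, fn: () => void): () => void {
  let latest = 0;
  return () => {
    const token = ++latest;
    clock.schedule(delay, () => {
      if (token === latest) fn(); // only the last keystroke's timer runs
    });
  };
}
```

With a 300 ms debounce, typing, pausing, then typing again fires a search on every pause; an explicit submit button fires exactly one call per confirmed search, which mattered once each call kicked off a heavy visual-search job.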
Improving the product selection flow
Once the user finishes their search, they need to select the product. Product details were placed into a card-and-grid format to compartmentalize each product, consistent with e-commerce standards. However, this meant losing at least 50% of the width for each product. To determine the information hierarchy, I pulled data points from FullStory and conducted qualitative research with current users. The statistics below show that most users only require the product image, name, and number.
22% — assign products with just the product image, name, and number
2% — few brands use the stock amount and subtitle
<1% — view regions and variants
Visually search through product catalogue
Shoot and search
Using this feature was simple: users just drew a bounding box around the product and clicked Search. To soften the slow loading times, I introduced skeleton screens. Since we planned to integrate more heavy backend processes with long loading times, I worked with a front-end engineer to build the CSS, componentized it, and added it to the design system for future use.
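A skeleton card along these lines keeps the grid footprint stable while results load — a sketch, not the actual design-system component; the markup and class names are hypothetical:

```typescript
// While results load, each grid cell renders a gray placeholder with the
// same footprint as a product card, so the layout does not jump when the
// heavy visual-search call returns.
interface Product {
  name: string;
  number: string;
  imageUrl: string;
}

function renderProductCard(product: Product | null): string {
  if (product === null) {
    // Loading: the pulse animation lives once in the design-system CSS.
    return `<div class="product-card skeleton">
  <div class="skeleton-block image"></div>
  <div class="skeleton-block line"></div>
  <div class="skeleton-block line short"></div>
</div>`;
  }
  // Loaded: only the image, name, and number, per the research above.
  return `<div class="product-card">
  <img src="${product.imageUrl}" alt="${product.name}" />
  <p class="name">${product.name}</p>
  <p class="number">${product.number}</p>
</div>`;
}
```

Rendering the loading and loaded states from one component is what made it easy to drop into the design system for later long-running features.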
Confirming product details
Users see high-level details at a glance, with niche information tucked away in a bottom sheet. We kept the existing hover view and added semi-important details like "Out of stock" or the SKU.
Success metrics
80% of users would choose Visual Search first even when they knew the product name and SKU; it was much easier than recalling the SKU.
Brands are now able to hire curators more easily and do not need to train them on the product catalogue.
Number of products tagged increased by 3%.
If a user had never seen the product before, it could still be identified 15% of the time.
Moving forward and what I would do differently
I would reevaluate the use of a toggle to turn on the AI functionality. One option would be to separate the AI experience out completely, so there is less confusion about whether the feature is on or off.
I would display the details needed for minor edge cases in a more discreet way rather than in a bottom sheet.
With the amount of tags within the post, I would explore using these tags as keywords to help the AI determine the product within the image.
I would increase the size of each product card so that hovering is not required.