Introducing Citadel Lens for Images: Automated AI Testing and Synthetic Test Data Generation

Citadel AI is announcing the beta release of Citadel Lens for Images, our product for automatically testing image AI models and generating synthetic test data.  

Today, Citadel Lens is used by our customers to validate the reliability of tabular AI models, such as credit scoring systems, demand forecasting, and logistics optimization. With Citadel Lens for Images, we expand our capabilities into the vision domain, targeting AI systems for medical imaging, manufacturing visual inspection, content moderation, and more. 

Challenges when deploying image AI systems

Concerns about the reliability and trustworthiness of AI systems are growing worldwide, as highlighted in the recent EU AI Act and NIST AI Risk Management Framework. Citadel AI offers a concrete solution to the inevitable new risks of AI reliability.

It’s difficult for image AI systems to deal with irregular conditions, as such events rarely occur in training and test datasets. Preparing, training, and testing for a wide variety of edge cases is expensive and labor-intensive, requiring weeks or months of work.

As a result, an AI system may fail in deployment: for example, a medical image that is classified accurately in a brightly lit laboratory may be misclassified in a hospital with different lighting, or when the relative position of the camera and the patient changes due to a different facility layout.

For image AI systems deployed outdoors, an additional risk of malfunction arises from dynamic weather conditions such as rain, fog, or snow, and other environmental changes. Moreover, even if the operating environment is fully controlled, AI models may exhibit inconsistent performance (bias) across different populations of data.

Test models 95% faster and improve AI reliability with Lens

Citadel Lens automatically identifies weak spots and reliability issues in AI models, generating Lens Model Reports to interactively drill down and debug performance issues. 

For example, Citadel Lens for Images will automatically generate synthetic test data based on real-world environmental conditions, such as changes in camera hardware, changes in the environment, and changes to the position of objects, allowing customers to immediately identify and improve areas of low model robustness. 
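To make the idea concrete, here is a minimal sketch of the kinds of environmental perturbations described above (lighting, sensor noise, object position), using generic NumPy image transforms. This is illustrative only, not Citadel Lens's actual implementation, and the function names are our own.

```python
import numpy as np

def adjust_brightness(img: np.ndarray, delta: float) -> np.ndarray:
    """Shift pixel intensities, simulating a change in lighting."""
    return np.clip(img.astype(np.float32) + delta, 0, 255).astype(np.uint8)

def add_sensor_noise(img: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add Gaussian noise, simulating a different camera sensor."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def shift_horizontal(img: np.ndarray, pixels: int) -> np.ndarray:
    """Translate the image, simulating a change in object position."""
    return np.roll(img, pixels, axis=1)

# Generate a small synthetic test suite from one source image
# (a flat gray image stands in for a real photo here).
image = np.full((64, 64, 3), 128, dtype=np.uint8)
test_suite = [
    adjust_brightness(image, -60),   # dimmer hospital lighting
    add_sensor_noise(image, 25.0),   # noisier camera hardware
    shift_horizontal(image, 8),      # repositioned camera
]
print(len(test_suite), test_suite[0].dtype)
```

Each perturbed image can then be fed through the model alongside the original to check whether predictions stay stable.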

Additionally, the Lens Bias Finder automatically discovers model weak spots – segments of the dataset where a model has abnormally low performance. These data segments are based on combinations of human-annotated metadata and automatically extracted features. This makes it easy to slice AI performance across dimensions such as the location of the image, the data annotator, the brightness of the environment, the focus of the camera, and so on.
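The slicing idea can be sketched in a few lines of plain Python: group predictions by a metadata field and compute accuracy per group, then look for segments with abnormally low scores. The field names (`site`, `annotator`) and data below are illustrative, not Lens's API.

```python
from collections import defaultdict

# Toy prediction log: each record carries metadata plus whether
# the model's prediction matched the ground-truth label.
records = [
    {"site": "lab",      "annotator": "A", "correct": True},
    {"site": "lab",      "annotator": "A", "correct": True},
    {"site": "hospital", "annotator": "A", "correct": False},
    {"site": "hospital", "annotator": "B", "correct": True},
    {"site": "hospital", "annotator": "B", "correct": False},
]

def accuracy_by(records, key):
    """Group records by a metadata field and compute per-group accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        hits[r[key]] += r["correct"]
    return {k: hits[k] / totals[k] for k in totals}

print(accuracy_by(records, "site"))
# "hospital" scores 1/3 vs. 1.0 for "lab" -- a candidate weak spot
```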

In addition to model errors, Lens can also detect data errors (mislabeled data points), helping AI developers to boost model quality through improving dataset quality. Lens also provides global and local explanations of model behavior to enable transparency and debugging.
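One common family of label-error checks, shown here purely as an illustration (not necessarily the algorithm Lens uses), flags samples whose annotated label receives very low predicted probability from the model:

```python
def flag_suspect_labels(probs, labels, threshold=0.1):
    """Return indices of samples whose annotated label the model
    considers very unlikely -- candidates for relabeling review.

    probs:  per-sample lists of class probabilities
    labels: annotated class index for each sample
    """
    return [
        i for i, (p, y) in enumerate(zip(probs, labels))
        if p[y] < threshold
    ]

probs = [[0.90, 0.10], [0.05, 0.95], [0.02, 0.98]]
labels = [0, 1, 0]  # the third annotation strongly disagrees with the model
print(flag_suspect_labels(probs, labels))  # [2]
```

Flagged samples are then surfaced for human review rather than silently corrected.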

Automated AI Robustness Testing for environmental changes
Interactively slice model performance by data segment
Visualize local and global model explanations

Citadel Lens can be applied to a wide range of AI applications, from systems based on tabular data, such as credit scoring systems, demand forecasting, and logistics optimization, to systems based on image data, such as medical imaging, manufacturing visual inspection, and content moderation. Since Lens natively integrates with major ML frameworks and data sources, it does not require modification of your models. By automating synthetic data generation and model performance testing, Lens allows our customers to test models 95% faster* and achieve higher AI reliability. Freeing up this time allows ML engineers and data scientists to run more experiments, deploy more AI applications, and improve model performance to maximize the potential of AI.

*Calculated in comparison with the conventional 10-50 hours for (B2) model development process described in METI’s “AI Guidebook” p. 20.

About Citadel AI

Citadel AI provides automated testing and monitoring products for AI applications, to help organizations minimize AI reliability risks, and maximize AI performance from research to deployment.

Citadel AI’s co-founder was formerly the product manager for ML infrastructure at Google Brain, and currently leads Citadel AI’s product and engineering teams.

Company Overview

Representative Director: Hironori Kobayashi
Headquarters: Shibuya-ku, Tokyo
Establishment: December 10, 2020
Company URL: https://www.citadel.co.jp
Twitter: https://twitter.com/CitadelAI
Contact us: info@citadel.co.jp

Get in Touch

Interested in a product demo or discussing how Citadel AI can improve your AI quality? Please reach out to us here or by email.
