Citadel AI is excited to announce the beta release of Citadel Lens, an AI testing tool to accelerate the path from PoC to production.
Many companies get stuck at the proof-of-concept stage when developing AI applications. To help, Citadel Lens automatically detects AI problems during development: low-performing customer segments, backward incompatibilities, robustness weaknesses, and more.
Lens can be run manually from a web UI or automatically in a CI/CD pipeline, helping you to deploy AI to production with confidence.
Unlike traditional software, AI software quickly loses accuracy in a rapidly changing real-world environment. As society increasingly relies on automated systems to make critical decisions, it’s essential for AI owners to have quality management tools in place.
AI systems are frequently retrained, creating new model versions that can produce divergent predictions and accuracy on the same data. Maintaining consistent quality across versions is important, but repeatedly measuring and debugging these quality issues is a major time sink for ML teams.
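As a minimal sketch of what a backward-compatibility check looks like (this is illustrative only, not Citadel Lens's actual API), the snippet below trains two scikit-learn "versions" of the same model and counts the data points the old version predicted correctly but the new version now gets wrong:

```python
# Illustrative backward-compatibility check between two model versions.
# The models and threshold-free report here are hypothetical examples,
# not Citadel Lens functionality.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "v1" and "v2" stand in for two retrained versions of one production model.
v1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)
v2 = LogisticRegression(max_iter=1000, C=0.01).fit(X_train, y_train)

pred_v1 = v1.predict(X_test)
pred_v2 = v2.predict(X_test)

# "Negative flips": points v1 classified correctly that v2 now misclassifies.
# Aggregate accuracy can improve while these regressions still break users.
flips = int(np.sum((pred_v1 == y_test) & (pred_v2 != y_test)))

print(f"v1 accuracy: {v1.score(X_test, y_test):.3f}")
print(f"v2 accuracy: {v2.score(X_test, y_test):.3f}")
print(f"negative flips (regressions): {flips}")
```

Running a check like this on every retrain is exactly the kind of repetitive measurement that benefits from automation.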
After deployment, real-world serving data can cause further problems. Unlike the clean-room environment of model development, production traffic often includes invalid, unknown, or skewed data points, which lead to incorrect predictions and degraded performance.
Identifying these serving data problems can be both expensive and time-consuming. Metrics like accuracy, which are easy to measure at training time, require manually labeling data points at serving time, a process that can take weeks or months to provide feedback.
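One way to catch invalid or unknown serving data without waiting for labels is to validate incoming records against training-time expectations. The sketch below is a hypothetical example of that pattern (the column names, ranges, and categories are invented for illustration and have nothing to do with Citadel's products):

```python
# Hypothetical serving-time data checks derived from training-time stats.
# Everything in this schema is an invented example.
train_schema = {
    "age": {"min": 0, "max": 120},
    "plan": {"allowed": {"free", "pro", "enterprise"}},
}

def validate_record(record: dict) -> list:
    """Return a list of problems found in one serving-time record."""
    problems = []
    for col, rules in train_schema.items():
        value = record.get(col)
        if value is None:
            problems.append(f"{col}: missing")
            continue
        # Numeric value outside the range observed during training.
        if "min" in rules and not (rules["min"] <= value <= rules["max"]):
            problems.append(f"{col}: out of training range")
        # Categorical value the model never saw during training.
        if "allowed" in rules and value not in rules["allowed"]:
            problems.append(f"{col}: unseen category")
    return problems

print(validate_record({"age": 150, "plan": "trial"}))
# flags both problems: age out of range, plan unseen
print(validate_record({"age": 30, "plan": "pro"}))
# clean record: empty list
```

Checks like these give immediate feedback on data quality, long before labeled accuracy numbers become available.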
To successfully deploy AI applications, you need to build quality checks across the end-to-end ML lifecycle.
We offer two easy-to-use products that improve reliability at different steps of the ML lifecycle: Citadel Radar (serving time) and Citadel Lens (training time). Our products can be used as a hosted SaaS service, in a private cloud, or on-premises.
Today, we are releasing the beta version of Citadel Lens, an AI testing tool that accelerates the path from PoC to production. During development, Lens automatically surfaces problems such as low-performing customer segments, backward incompatibilities, and robustness weaknesses.
Lens Model Reports help ML teams quickly fix performance problems and enforce a consistent quality bar across all AI projects in a company. Lens integrates with all major ML model formats (including scikit-learn, Keras, PyTorch, PyCaret, XGBoost, and LightGBM) and can be run manually or automatically in a CI/CD pipeline.
Citadel Lens can be combined with Citadel Radar, which provides automated data and model monitoring, to achieve end-to-end "reliable and explainable AI" – helping you deploy more successful AI applications, and improve the productivity of ML teams.
Citadel AI is based in Tokyo, and raised a seed round from UTokyo IPC and ANRI in 2021. In January 2022, the company won the INTRO Showcase Grand Prize at BRIDGE Tokyo 2022, chosen by a panel of VCs as a startup poised to lead the next generation.
We’re actively recruiting talented software engineers to join our team – apply here!