C TEC Comment Zero Draft TEVV
Michael Richards
Executive Director, Policy, U.S. Chamber of Commerce Technology Engagement Center (C_TEC)
Published September 12, 2025
The U.S. Chamber of Commerce (“the Chamber”) appreciates the opportunity to submit comments to the National Institute of Standards and Technology (NIST) regarding its Outline: Proposed Zero Draft for a Standard on AI Testing, Evaluation, Verification, and Validation (TEVV).
The Chamber commends NIST’s commitment to developing science-based, stakeholder-informed standards through its “zero draft” process. This approach reflects a thoughtful and inclusive methodology that can lead to the creation of robust, voluntary standards grounded in industry expertise and practical implementation.
As outlined in the Chamber’s AI Principles[1], we strongly support the development of industry-led, consensus-based standards as a cornerstone of responsible digital innovation. Voluntary standards provide the flexibility needed to accommodate rapid technological advancement while ensuring accountability and public trust.
Further, the Chamber’s AI Commission Report[2] emphasizes the critical role of soft law mechanisms—such as standards and best practices—in shaping ethical and effective AI governance. These tools enable sector-specific guidance, foster innovation, and support global competitiveness. NIST’s leadership in convening government and industry to co-develop these pre-standards is essential to maintaining U.S. leadership in AI.
In this spirit, the Chamber offers the following comments and recommendations on the proposed zero draft to help ensure the framework is practical, scalable, and aligned with real-world needs.
General Comments:
1. Prioritize Adversarial Evaluation: We urge the draft to elevate adversarial evaluation—including persona-based “red teaming”—as a foundational component rather than an optional appendix. Embedding adversarial testing at the core of the evaluation process is essential for uncovering vulnerabilities that may not surface through conventional assurance methods.
2. Include Guidance on Purple Teaming: The draft should incorporate “purple teaming,” which integrates offensive (red team) and defensive (blue team) security functions. This collaborative approach ensures that identified risks are not only detected but also effectively mitigated in real-world operational contexts.
3. Expand Coverage of Agentic and Autonomous AI Systems: We recommend a more robust discussion of agentic or autonomous AI systems, which can operate independently of human oversight. These systems present unique safety and security challenges that may merit dedicated guidance within the framework.
4. Support Continuous Monitoring: To ensure ongoing safety and reliability, the framework should support continuous monitoring of AI systems post-deployment. Continuous monitoring is critical for detecting emergent behaviors and adapting to evolving threats.
5. Introduce a Severity and Risk Classification System: A standardized classification system for severity and risk would enable organizations to triage findings effectively and respond proportionally. This would enhance consistency across implementations and support more efficient resource allocation.
6. Incorporate Model Provenance and Supply Chain Verification: The draft should address upstream risks by including requirements for model provenance and supply chain verification. Understanding the origin and integrity of models prior to deployment is important for managing inherited vulnerabilities.
7. Refine Conceptual Mapping of Evaluation Intent and Threat Models: We recommend clarifying the framework’s concept map to distinguish between the evaluator’s intent (e.g., assurance vs. adversarial testing) and the system’s threat model. This distinction will help practitioners align evaluation strategies with specific risk profiles.
8. Provide Sector-Specific Examples and References: To facilitate practical implementation, the framework should include sector-specific examples and references such as MITRE ATLAS and OWASP ML Top 10. These resources offer actionable insights that can guide practitioners in applying the draft and future framework within their respective domains.
Conclusion:
The Chamber appreciates NIST’s leadership in advancing trustworthy AI through collaborative, science-based standards development. As AI technologies continue to evolve and permeate every sector of the economy, it is imperative that testing, evaluation, verification, and validation frameworks remain agile, risk-aware, and grounded in operational realities. By incorporating the recommendations outlined above, NIST can help shape a standard that not only reflects technical rigor but also supports innovation, competitiveness, and public confidence in AI systems. We look forward to continued engagement and stand ready to support NIST in this important endeavor.
Sincerely,
Michael Richards
Executive Director
Chamber Technology Engagement Center
U.S. Chamber of Commerce
[1] U.S. Chamber of Commerce. “U.S. Chamber Releases Artificial Intelligence Principles.” U.S. Chamber of Commerce, 23 Sept. 2019, https://www.uschamber.com/regulations/us-chamber-releases-artificial-intelligence-principles.
[2] U.S. Chamber of Commerce. Artificial Intelligence Commission Report. 9 Mar. 2023, https://www.uschamber.com/technology/artificial-intelligence/artificial-intelligence-commission-report.