LAST UPDATE – January 21st, 2026
We recognize that Artificial Intelligence (AI) has the potential to deliver significant benefits for our customers, employees, and society. At the same time, AI systems can introduce ethical, legal, and operational risks if not designed and used responsibly.
We are committed to developing, deploying, and using AI in a manner that is ethical, transparent, secure, and human-centric. Our approach to AI is aligned with internationally recognized AI governance principles and standards, including ISO/IEC 42001 (Artificial Intelligence Management Systems), as well as applicable laws and regulations.
This policy outlines the principles that guide our responsible use of AI and our commitment to maintaining trust with all stakeholders.
This policy applies to:
It covers all forms of AI, including generative AI, machine learning, predictive analytics, and automated or semi-automated decision-making systems.
We aim to be transparent about the use of AI where it materially affects individuals or business outcomes. Where appropriate, we provide meaningful explanations of how AI systems contribute to decisions, including their intended purpose and limitations.
AI is designed to support human judgment, not replace it. Clear accountability is maintained for AI systems, and appropriate human oversight is applied, particularly for high-impact or high-risk use cases.
We strive to prevent bias and unintended discrimination in AI systems. Reasonable measures are taken to assess training data, model behavior, and outcomes to promote equitable and fair results.
We respect individual privacy and handle data responsibly. AI systems are designed and operated in accordance with applicable data protection and privacy laws, using privacy-by-design and data-minimization principles. Detailed guidelines for the collection, storage, and handling of data are set out in our Privacy Policy.
We implement appropriate technical and organizational safeguards to protect AI systems and data against unauthorized access, misuse, or manipulation. AI systems are designed to operate reliably and safely within their intended context.
We do not use AI in ways that violate human rights, democratic values, or applicable laws. AI systems must not be designed or used to intentionally mislead, manipulate, exploit vulnerable groups, or cause harm.
We consider the broader societal and environmental impacts of AI and seek to use AI in ways that contribute positively to sustainable growth and social well-being.
AI may be used to:
We do not knowingly use AI systems that:
Data used in AI systems is governed to ensure:
When working with third-party AI providers, we expect appropriate safeguards and responsible data handling practices consistent with our standards. Our customers value our emphasis on data security and our continuously maintained ISO 27001 certification.
We apply a risk-based approach to AI across its lifecycle, from design and development through deployment and ongoing use. This includes:
Our AI practices are designed to comply with applicable global and local laws and regulations, including those related to data protection, consumer protection, and emerging AI regulation.
We align our AI governance with internationally recognized standards and frameworks, including ISO/IEC 42001, to support consistent, auditable, and responsible AI management.
We aim to build and maintain trust by:
AI technologies and regulations continue to evolve. We regularly review and improve our AI practices to ensure they remain effective, responsible, and aligned with legal, ethical, and societal expectations.