ISO 42001

What is it?

ISO/IEC 42001 is an international standard that specifies the requirements for establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System (AIMS) within organizations.

It is aimed at companies that provide or use AI-based products or services, and it helps ensure the responsible and safe use of AI technologies.

Main objectives of ISO/IEC 42001:
  • Risk Management: Helps organizations manage the risks associated with using AI.
  • Transparency and Accountability: Promotes transparent and responsible use of AI.
  • Data Quality: Highlights the importance of data quality for training and testing AI systems.
  • Fairness and Safety: Addresses fairness and safety in the use of AI.
  • Regulatory Compliance: Aligns with other established management standards, such as ISO/IEC 27001 and ISO 9001.
This standard is the first of its kind worldwide and provides comprehensive guidance for the responsible use of AI, even as the technology rapidly evolves.

Key points

  • Responsibilities: Clearly defines the roles and responsibilities of those who oversee the AI system.
  • Fairness: Addresses fairness in the use of AI, ensuring that automated decisions are not unjust.
  • Data Quality: Highlights the importance of data quality for training and testing AI systems.
  • Environmental Impact: Considers the positive and negative environmental impacts of using AI.
  • Maintainability: Ensures the ability to manage changes to the AI system to correct defects or adapt to new requirements.
This standard is designed to establish, implement, maintain and improve an AI management system within organizations, promoting a risk-based approach and ensuring responsible and sustainable use of AI.

Advantages

  • Ethical Use of AI: Promotes responsible and ethical use of AI technologies, ensuring respect for human rights and adherence to ethical guidelines.
  • Safety and Reliability: Ensures that AI systems are safe, reliable, and perform as intended through rigorous testing and validation processes.
  • Risk Management: Helps organizations address AI-related risks, such as data privacy and algorithmic bias.
  • Transparency: Promotes transparent and trustworthy use of AI, increasing trust among stakeholders.
  • Regulatory Compliance: Aligns with the EU Artificial Intelligence Act, making it easier to comply with current regulations.

AI Act Highlights

The AI Act (EU Regulation 2024/1689) is the world's first comprehensive regulation on artificial intelligence.

Here are some of the highlights:
  • Risk-based approach: The regulation adopts a risk-based approach, in which the rules and safeguards vary according to the risk level of the AI system.
  • Ban on dangerous practices: Prohibits certain AI practices considered unacceptable, such as real-time facial recognition in publicly accessible spaces and emotion recognition systems in workplaces and schools.
  • Transparency and accountability: Requires AI systems to be transparent and operators to be accountable for their decisions.
  • Innovation support: Encourages innovation, particularly for small and medium-sized enterprises (SMEs) and startups, through regulatory sandboxes and real-world testing mechanisms.
  • Harmonisation of rules: Introduces harmonised rules for the placing on the market, putting into service and use of AI systems in the European Union.
  • Monitoring and surveillance: Establishes rules for market monitoring, governance and enforcement.
These points aim to ensure that AI systems are safe, transparent and respectful of fundamental rights, while promoting innovation and the competitiveness of the EU.

Would you like more information?

Contact us
