The Imperative of High Ethical Standards for AI in Business

Author: Ryan Oskvarek | 01-October-2024

“With great power comes great responsibility.” Variations of this phrase have been with humanity since at least 1793. Today, as we consider embedding AI into our businesses and lives, we need to remember that these AI systems carry great power and even greater potential. This blog post argues that companies must implement rigorous ethical standards for each AI system they deploy, not just as a moral imperative but as a crucial business strategy.

The Power Asymmetry of AI

AI's transformative impact in business is characterized by a stark power asymmetry. Unlike human decision-makers, AI systems can process vast amounts of data, analyze patterns, and make decisions at unprecedented speed and scale. For example, AI-driven models can assess millions of customer interactions, execute financial transactions, or screen job applications in mere seconds. This efficiency and speed create a new layer of complexity and risk that businesses must navigate.

This capability creates an inherent power asymmetry between AI systems and the humans who rely on them. A single algorithm's decision-making power, unbounded by human limitations, can shape outcomes across entire markets, influence public sentiment, or make employment decisions—without the capacity for human empathy or ethical reasoning. Without careful oversight, AI can inadvertently perpetuate biases, propagate errors, and make decisions that adversely affect individuals and society.

A World Waking Up

Governments and regulatory bodies worldwide are increasingly aware of the potential risks posed by AI technologies and are taking decisive action. According to the 2024 Stanford AI Index Report, "The number of AI-related regulations in the United States has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%" (Stanford HAI, 2024).

This surge in legislative activity reflects a growing acknowledgment of AI's profound impact on society. As of the 2024 legislative session, "at least 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills, and 31 states, Puerto Rico, and the Virgin Islands adopted resolutions or enacted legislation." Examples include comprehensive AI legislation in Colorado aimed at preventing algorithmic discrimination and enhancing transparency, and the creation of AI task forces and specific regulatory measures across other states (NCSL, 2024).

The Imperative for Higher Standards

The power of AI in business settings magnifies corporate responsibility. AI systems that are not designed, managed, and used ethically can amplify existing biases or introduce new ones at unprecedented scale. A widely cited example is the recruitment tool Amazon developed and later scrapped: trained on historical hiring data dominated by men, it learned to favor male candidates over female ones, illustrating how quickly bias can infiltrate automated decision-making (Dastin, 2018).
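What basic oversight can look like in practice is often simpler than it sounds. As a minimal sketch, the code below compares selection rates across applicant groups and flags large gaps using the common "four-fifths rule" heuristic; the data, group names, and 0.8 threshold are illustrative assumptions, not figures from the Amazon case or any cited study.

```python
# A minimal sketch of a disparate-impact check for a screening model.
# The data, group labels, and the 0.8 threshold (the common "four-fifths
# rule" heuristic) are illustrative assumptions.

def selection_rate(decisions):
    """Fraction of applicants a model advanced (1 = advance, 0 = reject)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    ref_rate = selection_rate(decisions_by_group[reference_group])
    return {
        group: selection_rate(outcomes) / ref_rate
        for group, outcomes in decisions_by_group.items()
    }

if __name__ == "__main__":
    # Hypothetical screening outcomes for two applicant groups.
    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% advanced
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% advanced
    }
    ratios = disparate_impact_ratio(outcomes, reference_group="group_a")
    for group, ratio in ratios.items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this is only a starting point; the harder work is deciding which groups and outcomes to measure and what to do when a gap is found.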

The complexity of AI also introduces a "black box" problem, where the reasoning behind an algorithm's decision is often opaque. This lack of transparency can undermine trust, harm reputation, and expose companies to significant legal and financial risk. Errors made at machine speed can also carry large financial consequences: in the 2010 "flash crash," automated trading algorithms amplified market volatility and briefly erased hundreds of billions of dollars in market value within minutes (Johnson et al., 2013).

These risks highlight the need for companies to hold their AI systems to higher ethical standards than those applied to human employees. While human errors are often seen as forgivable, the scale and speed of AI errors can cause systemic harm, necessitating more stringent oversight.

Implementing Ethical Standards: Available Frameworks

Ethical AI refers to the development and use of AI in a manner that is fair, transparent, secure, and accountable, harmonizing innovation with the fundamental principles of justice and respect for individual dignity. Recognizing the need for ethical AI, several organizations have developed frameworks and standards to guide businesses. Two prominent examples are the IEEE CertifAIEd™ program and the NIST AI Risk Management Framework (RMF).

IEEE CertifAIEd™

The IEEE CertifAIEd™ program provides certification for AI systems based on rigorous ethical standards. It evaluates AI systems across several dimensions, including:

  • Transparency and explainability: Ensuring AI decisions are understandable.

  • Accountability: Assigning responsibility for AI outcomes.

  • Fairness and algorithmic bias: Mitigating biases that harm specific groups.

  • Privacy: Protecting sensitive data.

Certifications like IEEE CertifAIEd™ not only demonstrate a commitment to ethical AI but also help businesses comply with emerging legal standards.
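As a rough illustration of what the accountability and explainability dimensions can mean in day-to-day engineering, the sketch below records every automated decision together with its inputs, model version, human-readable reasons, and a named owner, so outcomes can later be traced and explained. The record fields and the log_decision helper are hypothetical examples, not part of the IEEE CertifAIEd™ criteria.

```python
# A minimal sketch of decision logging in support of accountability and
# explainability. The fields and helper below are hypothetical examples,
# not part of the IEEE CertifAIEd criteria themselves.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str        # which model produced the outcome
    inputs: dict              # features the model actually saw
    outcome: str              # the decision that was made
    top_reasons: list         # human-readable factors behind the decision
    accountable_owner: str    # named person or team responsible for the system
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one decision record to an audit log (JSON Lines file)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_decision(DecisionRecord(
        model_version="credit-screen-v1.3",
        inputs={"income": 54000, "debt_ratio": 0.31},
        outcome="declined",
        top_reasons=["debt_ratio above policy threshold"],
        accountable_owner="consumer-lending-risk-team",
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
```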

NIST AI Risk Management Framework (RMF)

The NIST AI RMF provides a structured approach for managing AI-related risks. It covers the entire AI system lifecycle, emphasizing:

  • Map: Context is recognized and risks related to context are identified.

  • Measure: Identified risks are assessed, analyzed, or tracked.

  • Manage: Risks are prioritized and acted upon based on projected impact.

  • Govern: A culture of risk management is cultivated and present (NIST, 2023).

The NIST AI RMF is endorsed by the U.S. government, further underscoring its relevance and utility for businesses. As part of the 2023 executive order on safe, secure, and trustworthy AI, the U.S. is pushing for broader adoption of this framework as a foundation for national AI standards.
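To show how the four functions might translate into a lightweight working artifact, the sketch below maintains a small risk register: each entry captures deployment context (Map), a likelihood-and-impact score (Measure), a planned response ordered by projected impact (Manage), and a named owner (Govern). The fields, the 1–5 scoring scale, and the example entries are assumptions for illustration, not requirements of the NIST AI RMF.

```python
# A lightweight risk-register sketch loosely organized around the NIST AI RMF
# functions (Map, Measure, Manage, Govern). The fields, the 1-5 scoring
# scale, and the example entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIRisk:
    system: str          # Map: which AI system and deployment context
    description: str     # Map: what could go wrong, and for whom
    likelihood: int      # Measure: 1 (rare) to 5 (near certain)
    impact: int          # Measure: 1 (negligible) to 5 (severe)
    response: str        # Manage: mitigate, monitor, transfer, or accept
    owner: str           # Govern: accountable person or team

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def prioritize(risks):
    """Manage: order risks by projected impact, highest score first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    register = [
        AIRisk("resume screener", "bias against protected groups", 3, 5, "mitigate", "HR analytics"),
        AIRisk("support chatbot", "confident but wrong answers", 4, 2, "monitor", "customer ops"),
    ]
    for risk in prioritize(register):
        print(f"[{risk.score:>2}] {risk.system}: {risk.description} -> {risk.response} ({risk.owner})")
```

Even a simple register like this forces the questions the framework cares about: who owns each risk, how it is measured, and what will be done about it.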

Implementing Ethical Standards Across Business Functions

Implementing AI ethically requires more than adopting frameworks; it demands an integrated approach to AI governance across the entire business. Governance frameworks like those offered by IEEE and NIST help ensure AI systems are ethically sound, transparent, and accountable at every stage of their lifecycle. Companies must establish robust AI governance structures to continuously monitor and manage ethical risks.

Call to Action

As AI reshapes the business landscape, high ethical standards must become the norm, not the exception. We urge companies to:

  • Conduct ethical audits using frameworks like IEEE CertifAIEd™ and NIST AI RMF.

  • Advocate for industry-wide ethical standards to create a level playing field.

  • Collaborate with ethics boards and experts to address emerging challenges.

  • Invest in ethical AI R&D to stay ahead of evolving risks.

Conclusion

The power of AI in business is undeniable, but with this power comes a responsibility to uphold the highest ethical standards. By implementing robust ethical frameworks, businesses can harness the full potential of AI while building trust, mitigating risks, and contributing to a future where AI truly serves the best interests of all stakeholders. The time to act is now—the ethical implementation of AI is not just a moral imperative but a crucial factor in long-term business success.

ZealStrat is committed to leading the charge in this crucial endeavor. Our AI Ethics services are designed to guide organizations in developing, deploying, and managing AI ethically and responsibly. With a team of IEEE Authorized Assessors, we offer comprehensive services, including AI Ethics Bootcamps, AI Risk Profile Creation, Documentation Discovery, Playbook Creation, and Preparation for IEEE Certification. We provide ongoing support to help clients maintain their ethical AI certifications and build trust with stakeholders. Rooted in industry standards and best practices, our approach helps organizations navigate the complex regulatory landscape, ensure compliance, and foster a culture of ethical AI governance.

Partner with ZealStrat to secure your place at the forefront of ethical AI innovation. For more information, contact us at contact@zealstrat.com.


References

  1. Stanford HAI. (2024). Artificial Intelligence Index Report 2024. Retrieved from https://aiindex.stanford.edu/report

  2. National Conference of State Legislatures (NCSL). (2024). Artificial Intelligence 2024 Legislation. Retrieved from https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation

  3. Dastin, J. (2018). Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

  4. Johnson, N., Zhao, G., Hunsader, E., Meng, J., Ravindar, A., Carran, S., & Tivnan, B. (2013). Abrupt rise of new machine ecology beyond human response time. Scientific Reports, 3, Article 2627. Retrieved from https://www.nature.com/articles/srep02627

  5. National Institute of Standards and Technology (NIST). (2023). NIST AI RMF Playbook. Retrieved from https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook