Artificial intelligence (AI) is not a philosophical discussion, business dream, or science fiction for CPAs—whether they are risk managers, auditors, or financial executives. Unlike many business development or operational executives who dream of expanding opportunities and efficiencies, CPAs and other financial professionals recognize and respect the accompanying threats that could just as quickly undermine the organization’s survival and stakeholder value. The concern is not so much the macro threats that impact society at large, over which an organization has little control, but the micro threats that governance and management must navigate to facilitate the achievement of organizational objectives. As with previous introductions of new technologies, the pressure on individual business lines to execute and capture potential new gains may exceed the prudence of managing enterprise-wide risks and protecting existing stakeholder value. Given CPAs’ responsibilities, a process to manage and audit these risks is needed.

Many CPAs quickly become overwhelmed when the subject of AI comes up. Some are overwhelmed by media reports predicting the displacement of their jobs. Others believe that the accounting professional’s role in AI is limited to tools that perform traditional accounting-related tasks. Fortunately, dedicated practitioners recognize the opportunities and strive to remain relevant.

Although most CPAs recognize their professional competency obligations, many do not adequately prepare themselves to make the most of their AI-related engagements. One place to start is this column, which previously provided an analysis of accounting-related publications to help readers better understand how AI can impact their jobs and careers (“Artificial Intelligence: Evolving Risk Guidance and Considerations,” https://www.nysscpa.org/2309-jl).

It’s not a question of engaging outside expertise, but rather of having an executive’s understanding of high-level threats and the potential controls to mitigate them. This understanding is the minimum needed to remain engaged with decision-makers and to direct and oversee the activities of AI specialists. The following represent common “worst practices” in providing AI-related services.

Failure to Use Existing Organizational Governance Practices and Policies

While AI is a relatively new technology for the enterprise, an organization’s existing governance and policies still apply. Even if the term “AI” is missing from the documentation, a satisfactory governance process should provide for emerging technology issues. Governance is more than getting leadership’s permission; it is about managing risk by understanding the impact on the organization, fulfilling regulatory obligations, appreciating the effect of vendors, and meeting ongoing stakeholder expectations.

If accommodations for AI are needed and justified, a robust exception process should ensure that those with governance responsibilities become aware of deviations from previously agreed-upon behavior and practices, and approve them where appropriate.

Not Obtaining a Core Understanding of AI

Some professionals rely too heavily on the media to learn about AI and its risks. Another challenge is information overload. Both can result in decision-makers not obtaining the knowledge and understanding needed to converse with, and challenge, advisors who are more familiar with the details. The challenge CPAs face is getting a trustworthy perspective as quickly as possible to jumpstart their education and research.

Many accounting and advisory firms publish whitepapers discussing AI and its risks. While these resources can be valuable, they may not be updated regularly or may fail to identify critical nuances. Practitioners should consult the United Kingdom’s National Cyber Security Centre article, “AI and Cyber Security: What You Need to Know” (https://tinyurl.com/4f4xt7d8), which is reliable and to the point, emphasizing key risk considerations for organizations at a high level.

Neglecting Financial Statement Implications

It’s easy to forget that AI does impact financial statement reporting. Unfortunately, some CPAs believe AI is all about increasing revenue or decreasing costs. The Center for Audit Quality’s “Emerging Technologies, Risks, and the Auditor’s Focus” (https://tinyurl.com/bdf5tnya), although not written explicitly for AI, covers timeless risks, including access privileges, erroneous changes, third-party oversight, change management, cybersecurity, and data reliability. Ironically, the risks themselves are similar no matter the technology; it is how they are managed that differs.

Neglecting to Consider a Recognized AI Risk Management Framework

When one considers that AI is still in its infancy, the number of consultants proclaiming themselves AI experts is quite alarming. The question is, where do those who do the advising go to obtain their understanding of AI risk? The National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (NIST AI 100-1, https://tinyurl.com/mwfzndb5) is quickly becoming a critical professional reference for understanding the practices expected to manage AI risk. According to the executive summary, “The Framework is designed to equip organizations and individuals—referred to here as AI actors—with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time” (p. 2). CPAs should leverage its contents to facilitate discussions and challenge line executives on identifying and managing risks.

Ignoring Industry-Specific AI Risks and Challenges

Many, if not all, industries are preparing to evolve with AI developments, and each sector has its own opportunities and threats. While standard AI tools are widely used, the shared implementation experiences within specific industries offer valuable lessons. Information Sharing and Analysis Centers (ISACs) help critical infrastructure owners and operators protect their facilities, personnel, and customers from cyber and physical security threats and other hazards. ISACs collect, analyze, and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency (https://www.nationalisacs.org/about-isacs). For example, the financial services industry ISAC, FS-ISAC, “released six white papers designed to help financial services institutions understand the threats, risks, and responsible use cases of artificial intelligence (AI). The papers are the first of their kind to provide standards and guidance curated specifically for the financial services industry. They provide additive resources that build on the expertise of government agencies, standards bodies, academic researchers, financial services partners including FSSCC and BPI/BITS, and NIST’s AI Risk Management Framework” (https://www.fsisac.com/newsroom/pr-ai-risk-papers).

Not Obtaining a Real-World Understanding of the Risk Management Challenges Faced

Many recognize the financial services industry as a leader in implementing AI, and CPAs in all sectors can gain insight into how AI could impact their own industry by studying experiences from this sector. One would generally expect numerous cases of successful and profitable AI projects; although this may happen in the future, the industry is not there yet. In March 2024, the U.S. Treasury Department issued a report focusing on the state of AI-related threats in financial services (“Managing Artificial Intelligence Specific Cybersecurity Risks in the Financial Services Sector,” https://home.treasury.gov/news/press-releases/jy2212). What makes it unique is that “the report’s findings are based on 42 in-depth interviews conducted in late 2023. The interview participants include representatives from the financial services sector, information technology (IT) firms, data providers, and anti-fraud/anti-money laundering (AML) companies.” Among its key findings is a list of best practices used in the industry to manage AI risk.

Limiting AI Knowledge to Practitioner-Related Tools

Much has already been written about AI and auditors, by sources ranging from investment analysts to the popular media to industry and professional groups. Most of this guidance is geared toward either the possibilities of AI within the profession or the techniques to automate practices. These are important, but clients care about results. When CPAs are placed in the position of advising internal or external clients on managing business-related risks, however, the available guidance, although it exists, can be limited. As implementations begin, the profession is starting to understand the types of controls needed in some situations to mitigate risks.

Forgetting the Vendor Risks Involved

Most organizations rely on third parties to develop and implement AI programs. In addition to the oversight issues that apply to non-AI activities, AI introduces new challenges for a vendor risk management program. Unique considerations include the development of algorithms, the data used for model development, data storage, the operations and performance of AI, and the monitoring of expected results.

The most important thing is to execute on the fundamentals. No matter the technology, many risks are realized because of management’s failure to perform basic control processes effectively. Understanding the environment, assessing risk, ensuring that controls function properly, obtaining the right information, and monitoring are classic challenges that continue to play a significant role in taking advantage of the business opportunities enabled by new technologies.

Joel Lanz, CPA, CISA, CISM, CISSP, CFE, is a lecturer at SUNY–Old Westbury and an adjunct professor at NYU-Stern School of Business, New York, N.Y. He provides infosec advisory services through Joel Lanz, CPA, P.C., Jericho, N.Y. He is a member of The CPA Journal Editorial Advisory Board.
