What Every CPA Needs to Know When Advising Nonprofits in the Age of Artificial Intelligence

by Martin Medeiros, Buckley Law, P.C.

As artificial intelligence reshapes the operational and financial landscape of virtually every sector, nonprofit organizations are no exception. From AI-powered donor outreach platforms to automated grant management systems and machine-learning-driven financial forecasting tools, nonprofits are adopting AI at a rapid pace — often without fully understanding the legal and regulatory risks that follow. For CPAs advising these organizations, the stakes have never been higher.

This article surveys the key legal frameworks — tax law, fiduciary duty, employment law, data privacy, and emerging AI-specific regulation — that CPAs must understand when serving nonprofit clients. As an advisor to CPAs, bookkeepers, and nonprofits in Oregon and Washington, I write this overview not to alarm CPAs, but to equip you: the CPA who understands these intersections will become an indispensable advisor to the nonprofit community.

I. The Foundation: Tax-Exempt Status and the Risk of “Private Benefit” Through AI

The cornerstone of nonprofit law for CPAs is Internal Revenue Code (IRC) § 501(c)(3), which exempts organizations from federal income tax when they are organized and operated exclusively for charitable, religious, educational, or other qualifying purposes. The “private benefit” doctrine — rooted in Treas. Reg. § 1.501(c)(3)-1(d)(1)(ii) — prohibits a 501(c)(3) from operating for the benefit of private interests, even incidentally.

If you aren’t buying the product, you are the product. Here AI introduces a novel and underappreciated risk: many AI tools licensed by nonprofits generate data that flows back to the vendor, training and improving the vendor’s commercial models. The IRS has not yet issued formal guidance specifically addressing this, but CPAs should analyze whether:

  • The nonprofit is effectively providing a commercial benefit to a for-profit AI vendor in exchange for discounted or donated software (raising both unrelated business taxable income and private benefit concerns);
  • Grant funds are being used to pay for AI tools whose primary beneficiary is the AI company’s model refinement rather than the charitable mission.

The IRS’s general framework for private benefit analysis — set out in American Campaign Academy v. Commissioner, 92 T.C. 1053 (1989) — remains instructive: courts will look at whether private benefits are “incidental in both a qualitative and quantitative sense.” CPAs advising nonprofits should help them document that AI tool use is mission-driven and not primarily serving the vendor’s commercial interests.

II. Unrelated Business Taxable Income (UBTI) and AI-Monetized Data

IRC §§ 511–514 impose a tax on unrelated business taxable income — income from a trade or business regularly carried on by a nonprofit that is not substantially related to its exempt purpose. The rise of AI creates a new UBTI pressure point that very few nonprofits are currently thinking about: data monetization. This can include sales or licensing of business lists as well as donor behavior data.

Some AI platform agreements — buried in pages of terms of use — effectively allow vendors to use nonprofits’ operational data (donor patterns, service recipient demographics, program outcome data) to train and commercialize their models. If a nonprofit enters into an agreement where it receives compensation or reduced fees in exchange for its data, the IRS could characterize this as a revenue-generating transaction subject to UBTI analysis under IRC § 513.

Even more concerning: if a nonprofit actively licenses its own AI-generated content or insights to third parties, that revenue stream may not be substantially related to its charitable purpose, triggering UBTI exposure. CPAs should conduct a careful UBTI analysis any time a nonprofit client is:

  • Licensing proprietary AI tools it has developed;
  • Selling AI-generated reports or research to commercial entities;
  • Receiving discounted services in exchange for data.

The foundational case here remains Rensselaer Polytechnic Institute v. Commissioner, 732 F.2d 1058 (2d Cir. 1984) (involving both for-profit and not-for-profit use of a fieldhouse), which confirmed that income from services rendered to outside parties in a commercial manner can constitute UBTI even for educational institutions.

III. Board Fiduciary Duties and the CPA’s Role in AI Governance

Nonprofit directors owe duties of care and loyalty under applicable state nonprofit corporation acts (e.g., Oregon Revised Statutes (ORS) 65.357 and 65.361; Revised Code of Washington (RCW) ch. 24.03A; California Corporations Code §§ 5230–5238; New York Not-for-Profit Corporation Law § 717; Delaware General Corporation Law as applied to nonprofits under 8 Del. C. § 141). The duty of care requires directors to exercise the care that a person in a like position would reasonably believe appropriate under similar circumstances.

Courts have consistently held that ignorance of material risks is not a defense. In Francis v. United Jersey Bank, 432 A.2d 814 (N.J. 1981) — a seminal case in corporate fiduciary duty law — the court made clear that directors have an affirmative duty to be informed about the activities of the organization they govern.

For CPAs, this has a direct implication: if you identify that a nonprofit’s board has adopted an AI tool with material financial, legal, or mission risk and you do not communicate it appropriately, you face exposure under both professional standards and potential negligence theories. American Institute of Certified Public Accountants (AICPA) Statement on Standards for Accounting and Review Services (SSARS) and Auditing Standard AU-C Section 265 (Communication of Internal Control Related Matters Identified in an Audit) create obligations to communicate significant deficiencies and material weaknesses — and an uncontrolled AI deployment with no governance framework may well rise to that level.

CPAs should advise nonprofit clients to adopt a formal AI Governance Policy that addresses:

  • Approval authority for AI tool adoption;
  • Data use and privacy review prior to any AI vendor contract;
  • Regular audit of AI outputs for accuracy, bias, and mission alignment;
  • Clear assignment of human accountability for AI-assisted financial decisions.

IV. Employment Law Risks When AI Touches HR and Payroll

Many nonprofits are now using AI tools for hiring, scheduling, performance assessment, and payroll processing. This creates significant exposure under federal and state employment laws — and CPAs who touch payroll or HR advisory services need to be aware of it.

Title VII of the Civil Rights Act of 1964 (42 U.S.C. § 2000e et seq.) prohibits employment discrimination on the basis of race, color, religion, sex, or national origin. The U.S. Equal Employment Opportunity Commission’s (EEOC) 2023 Technical Assistance Document, Artificial Intelligence and Algorithmic Fairness, makes clear that employers — including nonprofits — remain liable for discriminatory outcomes produced by AI screening tools, even when those tools are developed and operated by third-party vendors. The “disparate impact” theory of liability (Griggs v. Duke Power Co., 401 U.S. 424 (1971)) does not require discriminatory intent — only discriminatory effect.

Similarly, the Americans with Disabilities Act (42 U.S.C. § 12112) and the Age Discrimination in Employment Act (29 U.S.C. § 623) are squarely implicated when AI is used in hiring or workforce decisions. Several states — most notably Illinois (Artificial Intelligence Video Interview Act, 820 ILCS 42) and New York City (Local Law 144 of 2021) — have already enacted laws specifically regulating AI use in employment decisions, including requirements for annual bias audits.

For CPAs providing outsourced HR or payroll services, or who have access to systems that use AI-assisted HR analytics, this is not an abstract risk. Advise your nonprofit clients to review all vendor agreements for AI-related hiring tools and confirm that bias audits are being conducted.
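The bias audits referenced above typically begin with an adverse-impact screen. A common starting point is the EEOC’s “four-fifths rule”: a selection rate for any group that falls below 80% of the highest group’s rate is generally treated as preliminary evidence of disparate impact. The sketch below (with entirely hypothetical applicant data and group labels) illustrates the arithmetic only; an actual bias audit under laws like NYC Local Law 144 involves prescribed metrics, independent auditors, and legal judgment.

```python
# Illustrative four-fifths rule check (hypothetical data).
# A group "passes" the screen if its selection rate is at least
# 80% of the highest group's selection rate.

def four_fifths_check(groups: dict) -> dict:
    """groups maps group label -> (selected, total applicants).
    Returns group label -> True if the group meets the 4/5 threshold."""
    rates = {g: selected / total for g, (selected, total) in groups.items()}
    top_rate = max(rates.values())
    return {g: rate >= 0.8 * top_rate for g, rate in rates.items()}

# Hypothetical pools screened by an AI resume tool:
audit = four_fifths_check({
    "group_a": (48, 100),  # 48% selection rate (highest)
    "group_b": (30, 100),  # 30% rate; 30/48 = 62.5% of top rate -> flagged
})
print(audit)  # {'group_a': True, 'group_b': False}
```

A flagged result does not itself establish liability, but it is exactly the kind of finding a CPA should expect a vendor’s bias audit to surface and a client’s counsel to evaluate.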

V. Data Privacy: COPPA, HIPAA, and State Privacy Laws

Nonprofits frequently work with vulnerable populations — children, individuals with disabilities, people receiving healthcare or social services. The intersection of AI and these populations creates heightened data privacy obligations.

The Children’s Online Privacy Protection Act (COPPA) (15 U.S.C. § 6501 et seq.) and its implementing regulations at 16 C.F.R. Part 312 impose strict requirements on operators of websites or online services directed at children under 13. If a nonprofit deploys an AI chatbot, personalized learning tool, or digital outreach platform that collects data from minors, COPPA compliance is non-negotiable. The Federal Trade Commission (FTC) has signaled aggressive enforcement in this area — its 2023 Policy Statement on Education Technology is a clear warning.

The Health Insurance Portability and Accountability Act (HIPAA), implemented at 45 C.F.R. Parts 160 and 164, governs healthcare nonprofits — hospitals, community health centers, mental health organizations, and substance abuse programs. AI tools that process protected health information (PHI) — even indirectly, through predictive analytics or care coordination software — trigger HIPAA’s technical and administrative safeguard requirements. Nonprofit CPAs performing financial audits of healthcare organizations need to assess whether AI-related data flows are covered in Business Associate Agreements (BAAs) with technology vendors.

At the state level, CPAs must now navigate a patchwork of comprehensive privacy laws. The Oregon Consumer Privacy Act (OCPA), codified at ORS 646A.570–646A.589, is a broad comprehensive consumer privacy law. Washington has not enacted a general comprehensive consumer data privacy law like Oregon’s OCPA (despite earlier attempts such as the Washington Privacy Act). Instead, it passed the My Health My Data Act (MHMDA), codified at RCW 19.373, which functions as a sweeping health-privacy regime with comprehensive-style obligations.

The Washington law’s key provisions include strict opt-in consent requirements for collecting, processing, sharing, or selling “consumer health data,” defined very broadly to include any information that could identify or be linked to a consumer’s health status, conditions, or treatments, or even inferences about health, going well beyond traditional medical records. The Act also requires privacy notices, data minimization, and safeguards. Critically, violations are enforceable as per se violations of the Washington Consumer Protection Act (RCW 19.86), and the statute includes a private right of action, unlike most state privacy laws, which makes the potential damages exposure significant.

Several other state laws contain provisions specifically addressing automated decision-making — giving individuals the right to opt out of decisions made solely by AI systems. Nonprofits using AI to screen applicants for services, benefits, or programs must understand these obligations.

VI. Charitable Solicitation and AI-Powered Fundraising

Nearly every state regulates charitable solicitation, and the use of AI in fundraising adds several new compliance dimensions. CPAs who review nonprofit financial statements or assist with multi-state compliance filings need to flag the following:

Deepfake and Synthetic Media Risk: Several states are beginning to regulate AI-generated content in commercial and political contexts. While nonprofit-specific regulation is nascent, the FTC’s general authority under Section 5 of the FTC Act (15 U.S.C. § 45) — prohibiting unfair or deceptive acts or practices — clearly covers AI-generated donor communications that misrepresent the nature of the organization or the use of funds.

Donor-Advised Funds (DAF) and AI Screening: As DAF platforms increasingly use AI to screen grant recommendations and flag compliance issues, nonprofits that rely heavily on DAF contributions should be aware that algorithmic screening could inadvertently disqualify them based on inaccurate data. CPAs should help clients maintain accurate, up-to-date public information.

State Registration: The Charitable Solicitation Acts across all 50 states vary significantly. Some, like California (Cal. Gov’t Code § 12580 et seq.) and New York (NY EPTL Art. 8), require registration of third-party solicitors and fundraising counsel. When an AI platform functions as an autonomous fundraising agent — sending solicitations, processing donations, and engaging donors — there is an argument that the vendor may be functioning as a paid solicitor. CPAs should flag this analysis to legal counsel.

VII. The Emerging Federal AI Regulatory Landscape

AI legislative efforts to date draw a sharp contrast between the federal government, which is reluctant to restrain the technology’s development for fear of hampering innovation and creativity, and states — primarily coastal states such as California and New York — that have moved toward European Union-style oversight seeking to regulate or bar AI uses to protect consumers and civil liberties. There is no unifying AI-specific federal legislation, but existing laws, such as consumer protection and fair trade practices statutes, have been pleaded successfully in court. Most nations in Asia and Africa do not impose regulatory restrictions comparable to those of the United States and the European Union.

While federal AI-specific legislation remains in flux as of early 2026, CPAs advising nonprofits should be monitoring the following non-legislative trends:

The now-revoked Executive Order 14110 (October 2023) — “Safe, Secure, and Trustworthy Artificial Intelligence” — directed federal agencies to develop AI risk guidance and regulatory frameworks across sectors. Several of the resulting agency guidelines, including those from the Department of Education, Department of Health and Human Services, and the FTC, have direct relevance to nonprofits receiving federal grants or operating in regulated sectors. The current federal approach is to let the market develop before regulating it.

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) — while voluntary, the NIST framework is rapidly becoming the de facto standard for demonstrating AI governance due diligence. For nonprofits that receive federal grants, alignment with the NIST AI RMF is increasingly a best practice — and may soon become a contractual requirement in federal grant terms. CPAs conducting single audits under the Uniform Guidance (2 C.F.R. Part 200) should be asking clients whether their AI tools are inventoried and risk-assessed.

State AI Legislation: Among the most aggressive regulators — aggressive enough that some AI companies have left the forum — is Colorado, which enacted SB 24-205, the Colorado AI Act, effective February 2026. The Act imposes obligations on developers and deployers of “high-risk” AI systems, defined to include those making consequential decisions in employment, healthcare, education, and public accommodations. For nonprofits operating in Colorado and in other states considering similar legislation (Texas, Illinois, New York, California, and others are in active legislative sessions), this creates compliance obligations that will surface in financial audits.

VIII. Practical Recommendations for CPAs

Based on the foregoing legal landscape, here is a practical framework for CPAs advising nonprofit clients:

  1. Conduct an AI Inventory Audit. Before any financial engagement, ask your nonprofit client to identify all AI tools currently in use — across finance, HR, fundraising, program delivery, and communications. Map each tool to the underlying vendor contract and assess data flows.
  2. Review Vendor Contracts for Data Provisions. Flag any provisions that allow the vendor to use the nonprofit’s data to train AI models. Assess whether this creates UBTI risk, private benefit risk, or HIPAA/COPPA compliance issues.
  3. Assess Internal Controls Over AI-Generated Financial Data. Under AU-C 315 (Understanding the Entity and Its Environment), auditors must understand the information systems — including AI-generated outputs — that are relevant to financial reporting. AI-generated financial data should be subjected to the same internal control scrutiny as any other significant information system.
  4. Advise on Board AI Governance. Recommend that nonprofit boards adopt a formal AI policy. This not only mitigates fiduciary risk — it protects the CPA by establishing that governance recommendations were made and considered.
  5. Flag Multi-Jurisdictional Compliance. For nonprofits operating across multiple states, map applicable data privacy, charitable solicitation, and AI employment laws in each jurisdiction of operation.
  6. Coordinate with Legal Counsel. The intersection of AI and nonprofit law is evolving rapidly. CPAs are not lawyers and should not provide legal advice — but they should be fluent enough in these legal frameworks to recognize when a client needs to be referred to specialized legal counsel. Building a collaborative relationship with a nonprofit-focused attorney is now a professional necessity.
  7. Be the Old Kid on the Block. Do not experiment with technologies you do not understand well enough to use compliantly. There are cases of AI agents being given credentials by the credential holders to automate a task, ending in data disclosure, destruction, or corruption. If there is any doubt, put an air gap between your work and the client’s systems.
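Step 1’s inventory can begin as a simple structured list that maps each tool to its contract and to the risk areas discussed in Sections I, II, and V. The sketch below is purely illustrative — every tool name, field, and flag is hypothetical, and any real triage rubric should be set with legal counsel:

```python
# Illustrative AI inventory and triage sketch (all entries hypothetical).
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str                    # tool as the client identifies it
    function: str                # finance, HR, fundraising, programs, comms
    vendor_contract: str         # where the governing agreement lives
    trains_vendor_models: bool   # vendor may train on nonprofit data
    handles_phi_or_minors: bool  # touches PHI or data from children
    risk_flags: list = field(default_factory=list)

def triage(tools: list) -> list:
    """Attach review flags based on the data-use characteristics above."""
    for t in tools:
        if t.trains_vendor_models:
            t.risk_flags += ["UBTI analysis", "private benefit review"]
        if t.handles_phi_or_minors:
            t.risk_flags.append("HIPAA/COPPA counsel referral")
    return tools

inventory = triage([
    AITool("DonorBot", "fundraising", "MSA 2024-07", True, False),
    AITool("IntakeScreen", "program delivery", "SOW #12", False, True),
])
for t in inventory:
    print(t.name, t.risk_flags)
```

Even a spreadsheet capturing these same fields serves the purpose; what matters is that every tool is tied to a contract and a named risk-review pathway before the financial engagement begins.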

Conclusion

After 30 years practicing technology law, I can say that AI is the newest wave and will manifest in ways both predicted and unpredicted. The emergence of artificial intelligence in the nonprofit sector is not a distant future scenario — it is the present reality facing your clients today. The legal frameworks governing nonprofits were not written with AI in mind, but they apply to it nonetheless, and regulatory agencies are actively developing AI-specific guidance and enforcement priorities.

As a CPA serving nonprofits, your role is evolving from financial reporter to strategic advisor. The CPAs who will distinguish themselves in this era are those who understand not just the numbers, but the legal environment in which those numbers are generated — and who have the courage to ask hard questions about AI before problems become crises.

The law in this area is moving fast. Keep learning, keep asking questions, and keep your nonprofit clients one step ahead. Follow the written statutes, decided cases, and positions of your regulator.

For assistance on AI and its impact on business, nonprofits, and contracts, contact Buckley Law attorney Martin Medeiros at 503-620-8900.

Martin Medeiros is a Shareholder at Buckley Law. With more than 20 years of experience, his practice area encompasses a range of services to clients including business formations and transactions, intellectual property, technology applications and IT, business succession management, privacy and security, and copyright and trademark law. Martin helps organizations build value by treating intellectual property as a strategic asset.

This article is intended for general informational purposes and constitutes attorney commentary on legal and regulatory trends. It does not constitute legal advice. Readers should consult qualified legal counsel regarding specific matters affecting their clients.

The provision of this material does not create an attorney-client relationship between the firm and the reader, and does not constitute legal advice. Legal advice must be tailored to the specific circumstances of each case, and the contents of this article are not a substitute for legal counsel. Do not take action in reliance on the contents of this material without seeking the advice of counsel.