Protecting Artificial Intelligence Intellectual Property
Our team has deep experience assisting companies in protecting their AI intellectual property, including IP in training data, foundation models, and generative outputs. We have advised on IP and copyright infringement in the AI space, including questions of ownership and validity of IP and copyright in AI models and their outputs, copying of original elements and code, and assertions of substantial similarity and derivative works.
Artificial Intelligence Procurement & Vendor Due Diligence
We negotiate technology contracts concerning AI/ML tools and other emerging technologies, helping businesses leverage their existing procurement protocols and information security processes. We work to negotiate critical terms in AI contracts, including data ownership and use rights; representations and warranties for data and models; transparency and audit rights, including bias audit requirements; and liability and indemnification provisions addressing the legal, reputational, and organizational risks of deploying a vendor AI/ML tool. We also work with clients to evaluate Tech E&O and other potential insurance coverage options to mitigate risk.
Building Out an Artificial Intelligence Governance Function & Addressing Data Privacy
AI governance is the ability to direct, manage, and monitor an organization's AI activities. Our team works with businesses to develop internal, enforceable policies and procedures; external notices and rights protocols; and a cross-functional compliance team to serve as responsible stakeholders for the business's AI deployment. We draft internal AI Governance Policies that set forth standards for pre-deployment evaluation (design, testing), use parameters and metrics, and post-deployment model validation for all AI/ML tools. We also work with businesses to identify measures for continuous monitoring for fairness, quality, and technical drift or creep, and to set forth the organization's strategy for AI oversight.
If training data includes personal information, we update Privacy Policies, Employee Handbooks, and other just-in-time notices and disclosures as may be required. Existing privacy and AI laws, such as the California Consumer Privacy Act and New York City Local Law 144, require businesses to provide notice to consumers whose information is processed by an automated or AI/ML tool for certain types of decision-making. We advise businesses on whether additional requirements, such as obtaining opt-in consent or posting the results of bias audits for these tools, may apply.
Auditing for Ethical Biases, Discrimination & Adverse Impacts
We conduct AI risk assessments and classifications for AI/ML use cases, working to identify the potential risks of an AI/ML tool, including risks to the organization (e.g., reputational harm, business interruption), to the general public, or to an identified stakeholder class, along with associated ethical concerns. We leverage risk frameworks to rate AI risk (low, medium, high), assign risk probability, and identify pre-deployment risk mitigation strategies and safeguards. We also work with select outside vendors to perform independent audits of AI tools and governance programs under attorney-client privilege, evaluating AI tools for ethical and selection bias under anti-discrimination laws (e.g., New York City Local Law 144, Equal Employment Opportunity Commission guidance, insurance regulations).
Employee/Board Training & AI Proficiency
Training and AI proficiency efforts enhance a business's technical AI capacity, allowing it to further modernize and innovate. We conduct employee and board of director trainings on the deployment of AI/ML tools and on existing and emerging artificial intelligence regulations and governance frameworks, helping businesses take the mystique out of AI and achieve their AI adoption goals.
With the advent of new public language models and chatbots, such as ChatGPT and Bard, we work with businesses to update or develop AI Acceptable Use Policies that address employee use of AI tools and language models and protect confidential business information and other data that employees may submit as inputs.
Artificial Intelligence Incident Response
AI/ML technologies can be subject to security vulnerabilities and intrusions, and can cause privacy harms and other real-world impacts on people. An “AI event” can be anything from a publicly posted consumer complaint of discrimination or disparate impact, to the failure of an AI tool relied upon for critical business functions, to a regulatory investigation into a particular use case.
Just as businesses maintain and test a cyber Incident Response Plan (IRP) to detect, mitigate, and respond to cyber threats, our team works with businesses to develop and test a plan for responding to the unintended consequences of AI usage. Increasingly, such response plans are required by emerging artificial intelligence laws and frameworks. For example, both NIST AI guidance and the Colorado Department of Insurance's proposed regulations call for an AI Incident Response Plan detailing how the organization will respond if significant risks develop during deployment of an AI/ML technology.
Navigating Artificial Intelligence Regulatory Inquiries and Related Legal Challenges
We work with businesses to respond to federal and state regulatory inquiries into the use and marketing of AI/ML technologies, including assertions of false advertising and deceptive business practices by the Federal Trade Commission (FTC). We have defended numerous consumer privacy class actions involving automated technologies, digital applications, and data sharing, including claims for invasion of privacy, breach of warranty, and other statutory and common law theories.