Artificial Intelligence: Safeguard Your Business From Negative Effects

Mitigating the Negative Effects of Artificial Intelligence

In the past year, it has become clear that generative AI is not only here to stay but is altering the landscape of virtually every aspect of Corporate America—legal, HR, IT, IP, media and entertainment, marketing and advertising, public relations, and operations. Some companies are forging ahead, embracing the new technology as quickly as it becomes available. Others are more cautious, withholding judgment (and implementation) until more is known. Yet even those companies that have not formally authorized the use of generative AI likely have employees, independent contractors, and vendors who are using the technology in their daily work without prior approval. Regardless of which camp your company is in, how do you mitigate the risks associated with such use?

Understanding the Basics of Generative AI

While sophisticated algorithms have long powered the most widely used technologies and applications, generative AI brings a new frontier of risks and benefits. The first step is understanding the basics of the technology. Generative AI is artificial intelligence that can generate new content, ranging from text to images to even music, based on its input or "training" data. "LLMs," or large language models, are machine learning models that predict and generate text. LLMs are trained on huge data sets that teach them how characters, words, and sentences relate to one another. The technology does not understand context and cannot distinguish between the answer the user wants and the answer that is factually accurate. Thus, generative AI can experience "hallucinations," in which it generates factually inaccurate or even nonsensical information that nonetheless appears plausible and coherent. Hallucinations occur because generative AI applies the patterns it has learned from its training data without regard to factual accuracy, and its conclusions are often presented with a confidence that belies their unreliability.
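For readers who want a concrete intuition for why hallucinations happen, the pattern-matching behavior described above can be sketched with a toy example. This is not any real LLM, just a minimal "bigram" model written in Python for illustration; the corpus, function names, and structure are all hypothetical.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram "language model" learns which word tends to
# follow which from a tiny corpus, then generates text from those
# patterns alone. Like a real LLM (at vastly smaller scale), it has no
# notion of whether its output is factually true.
corpus = (
    "the court granted the motion . "
    "the court denied the motion . "
    "the judge granted the appeal ."
).split()

# Record every word that follows each word in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=6, seed=0):
    """Generate up to n words by repeatedly sampling a learned follower."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Because the model only chains locally plausible word pairs, it can produce fluent-sounding sequences such as "the judge granted the motion" that never appeared in the corpus at all; a hallucinated case citation is the same failure mode at a much larger scale.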

Given the nature of the technology, its uses can be found across almost every aspect of a company’s business. Legal departments are using generative AI to assist in the automated drafting of contracts, on-demand searches for specific clauses, and even legal research and brief drafting, while HR may utilize the technology to wade through a sea of applications. In the context of marketing and advertising, generative AI can conduct competitive research, interpret consumer behavior patterns, analyze website traffic, and predict how advertising will perform. And by analyzing large amounts of data and predicting outcomes, generative AI can help streamline operations—a type of virtual assistant. In fact, regardless of whether your company has officially adopted or authorized the use of generative AI in connection with your business, it is likely your employees, independent contractors, and vendors (and the vendors’ employees) are already incorporating generative AI into their roles without your knowledge or express consent. A McKinsey study conducted in August of last year found that 27% of employees born in 1981 or later were already using AI regularly for work. And since legal governance of AI is still the Wild West, those employees, contractors, and vendors may be exposing your company to potential liability and other risks of harm.

Identifying the Risks of Artificial Intelligence

Privacy/data security, accuracy, and intellectual property infringement appear to be some of the highest generative AI risks; in fact, we’ve seen waves of litigation in the past year largely involving these topics. Until more AI-specific regulation exists, such lawsuits are expected to continue. Other risks include the ethics of use, particularly by in-house counsel, and the potential harm to a company’s own intellectual property rights.

One of the most pressing concerns is data security and privacy. Since generative AI tools are built around LLMs that retain, learn from, and reuse the data fed into them, a company’s internal data (and even that of its consumers) could be at risk of inadvertent disclosure. Further, a company’s confidential or other internal data—even trade secrets—could be incorporated into generative AI and then used to benefit the company’s direct competitors or in matters unrelated to the company.

Intellectual property infringement/protection is also a large concern. Feeding company data into the “input” of generative AI technology could put the company at risk of infringing third-party intellectual property rights. For example, if generative AI is trained using the copyrighted material of others, is the output generated by the technology a derivative work that infringes a third party’s copyright? In the last year, we’ve seen a large number of these lawsuits filed against the creators of generative AI technology like OpenAI. To the extent these cases make it all the way to trial, we could begin to see more of a framework for when generative AI use does or does not result in third-party infringement and whether that liability would extend all the way down to the user of the technology rather than its creator. Companies should also be aware of the potential risk to their own intellectual property posed by inputting such data or material into generative AI. On the flip side, what about content internal to the company that was created using generative AI? Can it even be protected? In the U.S., works not created by human authorship are not entitled to copyright protection. Although the law remains unsettled, that likely means that content generated for your company using AI is not protectable under existing copyright laws.

Inaccuracy, or the risk of generating false information, is another important risk. Generative AI relies on the subset of data on which it is trained and cannot distinguish between accurate and inaccurate data. In the legal sphere alone, AI has already been known to generate completely fake case citations and content in legal briefing. These AI hallucinations often appear to be completely coherent and are offered with a level of confidence that would suggest accuracy. This is because generative AI is meant to identify and apply patterns and does not determine the accuracy of its content. In 2023, the attorneys representing a plaintiff named Roberto Mata in a New York federal case used generative AI—ChatGPT—to create legal briefing. The application cited false caselaw, which resulted in the attorneys being sanctioned by the judge in the case.

The ethics of use are also a key risk, both internal to legal departments and elsewhere in the company, such as in human resources, where inadvertent biases may come into play in the analysis of employee or applicant data. Specific to in-house counsel, ABA Model Rules 1.1 (duty of competence, including technological competence), 1.6 (duty of confidentiality), and 5.3 (duty to supervise nonlawyer assistants) need to be front of mind in implementing any use of generative AI.

Solutions to Protect Your Business from the Risks of AI

1. Determine how AI is currently used, or could be used, by your company’s employees, contractors, and vendors, and identify the risks specific to your business.

The first step in mitigating the risk of generative AI technology to your company is to determine the extent to which the technology is already in use, or could be used, by your company’s employees, contractors, and vendors. This will necessarily require an analysis of each department internal to your organization—not just the employees within each department, but any contractors or vendors already under contract with the company. Next, identify company-specific risks. Are employees, contractors, or vendors currently using generative AI without any parameters? Do the company’s third-party contracts cover or provide guidelines for the use of generative AI and the apportionment of liability should something go wrong? Is anyone monitoring what data is input into generative AI, and how long (or whether) that data is retained or reused? Who is responsible for determining the accuracy and lack of bias of any content generated by AI for use by your company? Understanding how generative AI is or could be used by your company, and what risks arise out of such use, can help you identify what parameters need to be in place.

2. Meet with Existing/Prospective AI Vendors

The next step is to meet with (or equip your IT department to meet with) all existing and prospective vendors to understand how each vendor uses (or will use) generative AI and how data is stored, protected, transferred, and removed. For example, since generative AI depends on and learns from its “input,” will any of your company data input into various AI platforms be used to train the application? Can your company even enjoy the benefits of a specific AI platform if it does not contribute its own data to the “training” of the platform? Is the data stored by that application and, if so, for how long? Is there an option for automatic deletion of the data, or must removal happen manually by someone internal to your organization? Will the company’s data input into the application be shared with third parties? How will it be shared, and what is the process for protecting the data during transfer?

In addition to understanding how each vendor uses or will use generative AI and any input from your company, it is also important to review each vendor contract to determine whether provisions exist to protect and indemnify your company from potential liability arising out of the use of each platform. So far, Microsoft and Adobe (and a few others) have indemnified some customers against certain claims like copyright infringement. Knowing exactly what indemnification is or is not offered under each application will help you assess and mitigate risk.

3. Outline Acceptable Uses Across All Departments and Jurisdictions

Another key factor in mitigating company risk arising out of the use of generative AI is establishing and implementing clear guidelines outlining which uses of generative AI are acceptable and authorized by the company and which are not, providing as much detail as possible. Which specific platforms may be used internal to the company, and by which departments? Which employees are allowed to participate in such use, and who at the company will be responsible for determining what data to input? Many companies are designating an oversight person or committee to monitor such use and any outputs for privacy, accuracy, and infringement concerns. Beyond establishing these official guidelines for use, it can also be helpful to identify a process for assessing compliance, as well as for any ongoing risk-versus-benefit analysis. Involving your stakeholders in the development and implementation of these internal policies can help identify practical considerations and also ensure greater “buy-in” and compliance company-wide.

4. Be Aware of Existing Laws and Monitor AI Regulation Going Forward

Depending on the jurisdictions in which your company operates, existing privacy, intellectual property, employment, and other laws may apply to generative AI use. And more AI-specific regulation is sure to come. The FTC recently finalized a rule prohibiting the impersonation of government agencies and businesses. Almost immediately thereafter, based on the rapid development of generative AI, the FTC issued notice of a supplemental rule that would prohibit the impersonation of an individual in a matter affecting commerce. Even prior to this activity, the FTC issued a joint statement with the Consumer Financial Protection Bureau, Department of Justice, and Equal Employment Opportunity Commission that each agency would be using its respective enforcement authority to regulate the use of generative AI to protect consumers from discrimination, bias, and other harms. Likewise, both California and New York have already passed automated decision-making regulations specific to the use of AI in human resources. Many other local, state, federal, and worldwide regulations specific to AI use in various contexts have been proposed and/or are pending, including Canada’s pending Artificial Intelligence and Data Act and the EU’s AI Act and Directive on AI Liability, political agreement on the former having been reached in December of last year. Likewise, a sweeping executive order signed by President Biden last October called for standards and testing for AI models. Some state bar associations are also in the process of preparing ethics guidelines for lawyers. As the legal and regulatory landscape for AI continues to evolve, in-house counsel and executives will need to remain up-to-date on the changing and emerging laws and regulations regarding AI use.

5. Analyze Existing Insurance Coverage

One largely untapped area of importance to companies is whether their existing insurance policies provide any coverage for AI-related liability. Given how suddenly AI appeared on the scene and how rapidly the technology has advanced, it is likely existing form policies do not provide adequate protection against AI-related liability and will need to be updated to cover the growing risks of such technology.

6. Start Slowly

Finally, once your internal analysis is complete and procedures are in place, you can mitigate the risk of generative AI by starting slowly. While you may wish to rush to employ AI in order to be “first” to the scene, implementation of AI must be effective and sustainable, and the benefits must outweigh the risks. Incorporating specific AI technologies across departments and jurisdictions slowly, assessing as you go, and tweaking internal policies, third-party contracts, and insurance coverage as needed will give you the greatest chance of mitigating the risks of this new and rapidly developing technology.

As with most aspects of business, the keys to mitigating risk related to generative AI technologies involve analysis of ethical purpose, accountability, transparency, fairness and non-discrimination, privacy, confidentiality and proprietary rights, legal compliance, and ensuring the safe, reliable, and secure use of technology. Following these initial steps will get you started in the right direction and will allow your company to enjoy the benefits of generative AI while also limiting the risk of such use.

For more information about cybersecurity and technology law, see the Klemchuk PLLC Industry Focused Legal Solutions pages.


Klemchuk PLLC is a leading IP law firm based in Dallas, Texas, focusing on litigation, anti-counterfeiting, trademarks, patents, and business law. Our experienced attorneys assist clients in safeguarding innovation and expanding market share through strategic investments in intellectual property.

This article is provided for informational purposes only and does not constitute legal advice. For guidance on specific legal matters under federal, state, or local laws, please consult with our IP Lawyers.

© 2024 Klemchuk PLLC