Should My Company Use ChatGPT for Commercial Use?
Why It’s Time to Develop Your Company’s Policies on Artificial Intelligence
By Chad Higgins, with support from guest contributor, Summer Associate Kiara Anthony
On the surface, it’s exciting to consider the opportunities that generative artificial intelligence (AI) will bring to your business. But beware—while leaders in the AI field signed an open letter about AI’s risks to humanity, others have compared it to disruptive, life-changing technology like electricity and nuclear energy. Italy initially banned its use over data privacy concerns, and both Meta and OpenAI have been sued for copyright infringement. There’s a lot to learn, and as we navigate AI as a society, there are immediate legal risks to know if you’re considering using it, especially ChatGPT, for your business.
What is ChatGPT?
ChatGPT is an example of a large language model—a form of AI that can perform natural language processing. This technology can process language inputs (text, speech, audio, etc.), analyze the data input, then generate an output. Since the technology launched in November of 2022, people have explored ways to leverage ChatGPT for more efficient workflows.
The name ChatGPT is derived from the term “Generative Pre-trained Transformer,” describing computer “neural network models” built on transformer architecture. Neural network models are loosely modeled on the human brain: the more data a model is trained on, the more capable it becomes. Information submitted to the service can be retained and used to further train the model, and errors and bias in that data can be reinforced along the way. In other words, your company’s information may be making the model, and everyone with access to it, smarter, while the outputs you receive remain subject to potential misinformation and bias.
What do I need to know about the risks of using ChatGPT?
Most simply, companies and employees need to know that any information put into ChatGPT should be treated as public. An employee using AI to summarize a company document, write a business memo, or draft a letter to a vendor may unknowingly expose company and/or client information, resulting in major financial exposure or liability for the company at the state and federal levels. Samsung banned the use of generative AI systems after employees submitted source code to identify a defect and used the tool to auto-generate meeting minutes, leading to a data leak. Other major companies have followed suit: Apple, Bank of America, JP Morgan Chase, Goldman Sachs, and Verizon are among the growing list of companies banning ChatGPT use.
AI technology companies often have their own Terms of Service, and it’s important for users to understand that:
- Input information (including inquiries, also known as “prompts”) may be subject to disclosure to third parties, including law enforcement and vendors.
- Input may not be deleted.
- Input may be used to develop algorithms, software, and output to unrelated third parties.
- Personally identifiable information (PII) will be retained.
Additionally, AI technology terms and conditions are riddled with legal loopholes and limitations, such as mandatory arbitration clauses, restrictions on mass filings, class action waivers, and provisions shifting legal responsibility for generated content to users. These ingredients are the perfect recipe for limiting liability for AI companies while increasing your company’s liability and exposure.
Five examples of the legal risks
Using ChatGPT puts company and client information at risk. It is important to be extremely cautious and intentional when inputting any information, especially copyrighted, trademarked, proprietary, or personally identifiable information.
Intellectual Property and Copyright Risks
Companies should understand that OpenAI “assigns to you all its right, title and interest in and to Output.” The caveat, however, is that if OpenAI holds no right, title, or interest in the original content, it has nothing to assign, and your company’s use of the output could be a continued infringement of a third party’s rights.
Fraud and Misrepresentation
Gen AI raises various concerns related to fraud and the potential for employees, including attorneys, to misrepresent AI-generated work as their own. A New York federal judge sanctioned two attorneys and a law firm for having “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”
Academic and research misconduct are also a concern. For example, a student may cheat by requesting ChatGPT to write a 600-word AP English essay on a specific topic and submitting the generated output as their work. New York City public schools banned ChatGPT based on concerns about safety, accuracy, and plagiarism, but later rescinded the ban to allow educators and students to learn more about this potentially game-changing technology.
Misinformation and Bias
The ChatGPT FAQ, as of July 6, 2023, states that it “is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.” Additionally, “ChatGPT will occasionally make up facts or ‘hallucinate’ outputs.” The AI models are “trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like. It is important to keep in mind that this is a direct result of the system’s design and that such outputs may be inaccurate, untruthful, and otherwise misleading at times.”
High Regulatory Compliance Risk
What steps can my company take today to mitigate risk with AI technology?
Organizations should proactively develop their policies on ChatGPT and AI technology due to high-risk exposure to monetary penalties and potential claims in regulatory compliance, intellectual property, cyber fraud, consumer protection, and ethics. Here are six steps to get started today:
- Develop company-wide policies regarding ChatGPT and AI use. As the technology rapidly evolves, revisit those policies routinely, not simply as issues develop.
- Engage in discussions with insurers or insurance brokers to understand whether liabilities resulting from AI use would be covered under the current insurance policy.
- Create company-wide training: this field is new to everyone, and it’s important to assume that every employee has limited knowledge of the technology and, most importantly, the risks. Develop training for all employees, including onboarding training for new employees during orientation.
- Provide external transparency regarding your company’s policies.
- Conduct routine risk assessments, and monitor for compliance while establishing consequences for non-compliance. Make sure this is communicated to employees during training.
- Seek external legal review of your company’s ChatGPT and other AI policies.
It’s important to know that you’re not alone—there are experts who can help you navigate new policies and procedures and help you proactively protect your company. For more information about these technologies and issues, and guidance on protecting your company, please contact Kevan Deckelmann or Chad Higgins.