Overview

The rapid adoption of generative AI holds great promise for innovations that create new opportunities for many organizations and individuals. It is also accompanied by risks, some of which are understood today and others that are emerging or yet to be discovered. As such, any organization developing new capabilities and content with generative AI should have an appropriate use policy in place.

A deliberate and well-conceived use policy can help an organization promote innovation using generative AI technology while managing risks and allowing for changes as the landscape develops. Sophos has developed a working use policy for generative AI so that our employees can securely pursue new innovations that could benefit our customers and partners. We have since received many requests to share our policy, and we concluded that if it could help our customers, partners, and the industry in general, we would do so.

The discussion, examples, and copyable content in this document can be used to develop an approach—formal or informal—to using generative AI in a company’s business. We hope that this policy framework will help guide the use of generative AI during the early stages of exploration and discovery. As every organization takes its own approach to strategy and execution, the use of generative AI should likewise be tailored to the organization that adopts it.

This document is not a template, and it is intended for informational purposes only. This content is not meant to constitute legal advice, and we recommend consulting with a legal or professional advisor before adopting or implementing any policies based on the suggested guidelines and topics below.

Definitions and considerations

‘Policy’ definition

Here we use the term “policy” in its most general sense to mean rules or guidelines that an organization establishes to govern behavior; in this case, the use of generative AI within the organization. A policy in this context may be formally adopted through a corporate approval process or instituted more informally. An organization’s approach to generative AI, as we are addressing it here, may be called a policy or something else. Although it is important for an organization to consider how it communicates expectations, we are focused here on the general approach to generative AI, not on providing guidance on what a policy might be called or how it is implemented or enforced.

The scope

One initial consideration is whom the generative AI use policy covers. For example, it could apply to all employees and/or third parties who interact with the organization, such as contractors, vendors, and technology and sales partners. Alternatively, your policy could cover a subset of one or all of these entities.

Another question to consider is the scope of technology that will be covered in the policy. For example, “generative AI” may refer to a category of technologies trained on data sets that can generate text, images, video, sound, programming language code, or other work content (output) in response to prompts (input). Examples include ChatGPT/Bard (text-to-text/image), GitHub Copilot (text-to-code), Midjourney/Stability AI (text-to-image), and ModelScope (text-to-video). Generative AI can also appear as a feature in another application.

Generative AI considerations

Generative AI has the potential to deliver significant benefits by increasing efficiency and productivity. At the same time, however, current implementations may carry risks, including inaccurate or unreliable outputs (“hallucinations”), biased or inappropriate outputs, security vulnerabilities, intellectual property (IP) and privacy concerns, and vendor license terms and conditions that may be unacceptable to a given organization. Additionally, it remains legally unsettled whether generative AI outputs qualify for IP protection and who owns generative AI-created content. When integrating a generative AI implementation into your organization’s processes or applications, it is therefore crucial to clearly identify the materials created using generative AI tools to avoid potential complications with any company IP.

Because of the ongoing rapid development of generative AI and its evolving risks, organizations can benefit from a use policy to responsibly adopt generative AI, as outlined below.

Updates and revisions

The rapid pace of innovation in the generative AI domain suggests that a policy should be reviewed regularly and adjusted as necessary. Both the state of the art and the legal and regulatory landscape are changing so quickly that a neglected policy may soon become irrelevant.

Generative AI adoption, implementation, and use

Numerous vendors have developed generative AI implementations with different methods of access (e.g., chat interface, API) through different types of accounts (e.g., personal accounts, free accounts, paid accounts) and under different user terms. A more technical question for organizations is how to allow employees and business partners to access and exchange information with generative AI.

As with other applications used for business purposes, some companies may limit the use of generative AI to corporate accounts. Where the technology offers value to the organization, the company may require the use of accounts governed by terms and conditions it finds acceptable. In this respect, it is helpful to think of generative AI options much as you would SaaS vendors and other cloud service providers that operate in part through the collection of your data.
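As a concrete illustration of the corporate-account approach, the sketch below routes API access through organization-managed credentials. It uses the OpenAI Python SDK purely as an example of one vendor's interface, and the CORP_OPENAI_API_KEY variable is a hypothetical stand-in for a key issued by the company rather than by an individual.

```python
import os
from openai import OpenAI

# Hypothetical: the key is issued from a corporate secret store and
# exposed via an IT-managed environment variable, not a personal account.
client = OpenAI(api_key=os.environ["CORP_OPENAI_API_KEY"])

# Requests made through this client fall under the organization's
# negotiated terms rather than an individual's consumer terms of use.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Draft a short status update for the team."}],
)
print(response.choices[0].message.content)
```

Centralizing access this way also gives the organization a single point at which to apply logging, usage review, and the vendor terms it has actually negotiated.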

Implementing/adopting a new generative AI platform

Another consideration is the approval process required for adoption of a generative AI platform. For example, the acquisition of a new generative AI platform for use by an organization (whether as a standalone application or as a feature of another system) might require following the organization’s standard procurement process, supplemented with inquiries and terms specific to generative AI.

Steps taken prior to an implementation could include:

  1. Approval by appropriate functional stakeholders, where applicable, from internal organizations such as product management, engineering, data privacy, legal, security, and risk management. Some of these functions can, of course, be consolidated under fewer stakeholders.
  2. A technology assessment of the commercial options that covers areas such as the following:
    • Source and quality of the training data set
    • Whether inputs and outputs become part of the training data set, and whether there is an ability to opt out of having the input/output data used to train the generative AI model
    • The risks of using the generative AI model and internal mechanisms to mitigate/manage them
    • Ability to comply with the generative AI system’s terms and conditions
    • Commercial implications and associated license entitlements
  3. A business assessment of the planned implementation that accounts for factors such as the following:
    • Implementation cost
    • Expected return on investment (ROI)
    • The development of tracking mechanisms to help calculate actual ROI
    • Planned implementation usage, which is explored in detail in the next section

Use of approved generative AI

  1. Each new use case of generative AI could be subject to an approval process. For example, one possibility is to name a designated approver for each functional stakeholder so that learning is concentrated and accelerated.
  2. Use of safety features: if applicable, each user could be required to enable all available safety features, monitor for new safety features, and enable them as they become available (one possible check is sketched below).
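As one example of such a safety feature, the sketch below screens a prompt through OpenAI's moderation endpoint before it is used. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; other vendors expose comparable controls under different names.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def is_safe(prompt: str) -> bool:
    """Screen a prompt with the vendor's moderation endpoint before use."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

if is_safe("Summarize this quarter's onboarding feedback."):
    print("Prompt passed moderation; proceed with the request.")
```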

Prohibited by default, approved by exception

In some cases, it may be useful to require the review and approval of generative AI usage that falls outside the standard set of approved uses. For example, the use of generative AI could be prohibited unless approved by exception. If this is the approach taken, it could be important to update the list of approved use cases regularly due to the speed of innovation.

For example, the following types of use could be prohibited unless specifically approved:

  1. Usage that necessitates the following categories of input, whether in whole or in part (a screening sketch follows this list):
    • Any confidential information or business-sensitive information
    • Any personal data or any information that identifies the organization
    • Any of the organization’s IP
    • Proprietary computer code
    • Any information pertaining to the organization’s customers, suppliers, or partners, or any other protected information, including personally identifiable information (PII)
    • Any information about employees
    • System access credentials (for the organization’s systems or those of any third party)
  2. Usage where the output potentially affects the rights or obligations of any person
  3. Incorporation of the output into the organization’s technology or other IP
  4. Usage that violates the organization’s policies, contractual obligations, or the technology’s terms and conditions for use
  5. Any unlawful usage or usage that demonstrates unethical intent (e.g., disinformation, manipulation, discrimination, defamation, invasion of privacy)
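Policy language alone may not catch accidental disclosure, so an organization could pair the prohibited-input list above with an automated pre-filter. The sketch below is deliberately simple: screen_prompt and its patterns are hypothetical, and a production deployment would more likely rely on an established data loss prevention (DLP) service or a vetted PII-detection library.

```python
import re

# Hypothetical patterns for illustration only; a production deployment
# would use a DLP service or a vetted PII-detection library instead.
BLOCKLIST = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credential-like string": re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the prohibited-input categories detected in a prompt."""
    return [name for name, pattern in BLOCKLIST.items() if pattern.search(prompt)]

hits = screen_prompt("password: hunter2, email jo@example.com")
if hits:
    print(f"Prompt blocked; detected: {', '.join(hits)}")
```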

Code written by generative AI

Implementing generative AI to rewrite existing code into modern, memory-safe languages is a complex and ambitious undertaking that involves several technical, ethical, and practical considerations, including the following (a testing sketch follows the list):

  1. Quality and reliability: preserving the functionality of the original code while adhering to modern memory-safe practices
  2. Security and vulnerability analysis: a thorough review of the generated code to validate secure practices
  3. Performance: assessment and optimization of the generated code so that it meets or exceeds the performance of the original code
  4. IP rights: establishing the IP rights for AI-generated code, a complicated matter that current legal frameworks may not fully address
  5. Data privacy and compliance: taking appropriate data protection measures, in accordance with relevant regulations, to avoid inadvertently exposing sensitive or personal data used in training the generative AI models
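For items 1 and 3 in particular, one practical safeguard is differential testing: running the original routine and the AI-generated rewrite against the same cases and comparing outputs and timing. In the sketch below, legacy_parse and ai_parse are hypothetical stand-ins for the original code and its generated replacement.

```python
import time

# Hypothetical stand-ins: legacy_parse is the original routine and
# ai_parse is the AI-generated rewrite under review.
def legacy_parse(record: str) -> list[str]:
    return record.split(",")

def ai_parse(record: str) -> list[str]:
    return record.split(",")  # the rewrite should behave identically

def check_equivalence(cases: list[str]) -> None:
    """Compare outputs and rough timing of the original and the rewrite."""
    for case in cases:
        assert legacy_parse(case) == ai_parse(case), f"Mismatch on {case!r}"
    for fn in (legacy_parse, ai_parse):
        start = time.perf_counter()
        for case in cases * 10_000:
            fn(case)
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")

check_equivalence(["a,b,c", "", "x,,y"])
```

A clean comparison over a finite test set is evidence of equivalence, not proof; the security review in item 2 still requires separate analysis.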

Usage of ChatGPT and similar tools for personal productivity

In some organizations, the use of generative AI platforms may be permitted for the purpose of increasing personal administrative productivity, as illustrated below. Any such use should be subject to the following:

  1. Avoidance of any of the organization’s prohibited uses
  2. Adherence to the applicable terms, conditions, and policies
  3. Where available, opting out of training data set contributions prior to use
  4. Verification of the accuracy/reliability/appropriateness of the output prior to implementation

Examples of permitted business use of ChatGPT (and similar free generative AI tools) through personal accounts:

  1. Fact-checking or research similar to using Google search, Wikipedia, and other internet resources
  2. Creating first drafts of routine emails and internal documents
  3. Editing documents
  4. Generating basic ideas (e.g., creating a list of social activities for an offsite, describing how a particular code block works, detailing how to write a particular function)

An organization may decide to develop training and/or certification for individuals who will use generative AI. Current security training courses could be updated to include the risks associated with using generative AI.

It is also worth considering the consequences for a user who does not follow the framework. Those consequences will likely be determined similarly to those for other formal or informal company policies.

Final thoughts

Generative AI will impact many aspects of an organization, with known and unknown risks that need to be skillfully mitigated. We are in the early stages of understanding the technology’s impact, and forward-thinking organizations will not limit the innovation possibilities with generative AI. Reducing risks while encouraging exploration, curiosity, and trial and error will be the hallmark of the winners in this new age.

Taking a skillful approach to establishing use policies tailored to an organization’s likely use cases is a good first step as the world adapts to generative AI and its many possibilities. Beyond this, policies and guidelines could be integrated into a larger governance and risk management strategy, which may include forming a steering committee, conducting regular audits and risk assessments, and establishing ongoing policy refinement processes to balance the responsible use of generative AI with appropriate risk mitigation efforts.