Risk considerations and guidelines for artificial intelligence.
Introduction
In a world of 'instant' everything, companies need to work faster and in more agile ways to stay customer-centric and innovate at the speed of consumer expectations. Data needs to be accurate and available immediately. Businesses need to be able to break down barriers and deliver real-world solutions within their products right away.
AI is having a profound and transformative impact on many industries, enabling massive increases in efficiency, accuracy, and productivity and spurring faster innovation for companies that know how to leverage it in the right way. However, the technology is still emerging, and not all solutions are up to code. There is room for error and risk if the right approach and tools are not chosen.
We designed this paper to educate decision makers in companies who may be reviewing AI tools or internal deployment from a procurement, risk management, legal, and/or data privacy perspective. First, we explore the different types of AI out there and what you need to know about the various models and use cases, as well as the legal and risk considerations and ways to mitigate those risks. We then offer a pragmatic approach to developing and implementing AI that allows your organization to benefit from the use of AI while mitigating some of the ethical and legal risks. We also introduce AI at Brex in terms of our new tools and what we do internally to provide a secure and best-in-class platform of products and services.
What is artificial intelligence (AI)?
Let's start with the basics of what AI is and its different types of applications.
AI is a machine's ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and even exercising creativity. AI has been around for decades, and you've probably interacted with AI even if you didn't realize it. For example, voice assistants like Siri and Alexa are powered by AI technology. They use natural language processing (NLP) and machine learning (ML) to improve performance over time. NLP is an aspect of AI that enables computers to understand text and spoken words in much the same way humans can, and ML focuses on using data and algorithms to help computers 'learn' and gradually improve their accuracy.
Machine learning has been around for decades and has had an impact in various industries, including achievements in medical imaging analysis and high-resolution weather forecasting.
How is AI classified?
Artificial Intelligence is a broad term and there are multiple types of AI already being used. Below we explain the most prominent types of AI classifications.
Artificial Narrow Intelligence
Artificial Narrow Intelligence is a specific type of artificial intelligence in which a learning algorithm is designed to perform a single task, and any knowledge gained from performing that task will not automatically be applied to other tasks. While it's still very beneficial to users, this is considered 'weak AI,' and examples include search engines and the recommender systems you see on Netflix or Amazon.
Artificial General Intelligence
Artificial General Intelligence is the ability of an AI agent to learn, perceive, understand, and function like a human being. These systems will be able to independently build multiple proficiencies and form connections and generalizations across fields, massively cutting down training time. This will make AI systems nearly as capable as humans by replicating our multi-functional capabilities. Examples include chatbots that use natural language processing to analyze what humans are saying and create a response, as well as music AIs.
Artificial Superintelligence
The development of Artificial Superintelligence will probably mark the peak of AI research. In addition to being able to replicate the multi-faceted intelligence of human beings, ASI will be exceptionally enhanced because of greater memory, faster data processing and analysis, and decision-making capabilities.
Common AI terms.
Now let's explore the common AI terms you'll encounter as you evaluate AI applications for your business.
LLMs and GPT models
Fundamentally, this latest revolution in AI is driven by two technologies: large language models (LLMs) and transformers (GPT stands for 'generative pre-trained transformer').
A large language model (LLM) is a prediction engine that takes a sequence of words (such as 'the sky is') and tries to predict the most likely words to come next ('blue'). It does this by assigning a probability to each likely continuation and then sampling from those probabilities to choose one. The process repeats until some stopping criterion is met. Large language models learn these probabilities by training on large corpora of text.
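To make that prediction loop concrete, here is a toy sketch in Python (the probability table is invented purely for illustration; a real LLM derives these probabilities from billions of learned parameters and operates on tokens rather than whole words):

import random

# Toy next-word probabilities standing in for a trained model.
NEXT_WORD_PROBS = {
    "the sky is": {"blue": 0.7, "clear": 0.2, "falling": 0.1},
    "the sky is blue": {"today": 0.5, "<end>": 0.5},
    "the sky is clear": {"tonight": 0.6, "<end>": 0.4},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly sample a likely continuation until a stopping criterion is met."""
    text = prompt
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(text)
        if not probs:                 # stopping criterion: no known continuation
            break
        words, weights = zip(*probs.items())
        next_word = random.choices(words, weights=weights)[0]  # sample from the distribution
        if next_word == "<end>":      # stopping criterion: end-of-sequence marker
            break
        text = f"{text} {next_word}"
    return text

print(generate("the sky is"))  # e.g., "the sky is blue today"

Each call may return a different completion because the next word is sampled from a probability distribution rather than chosen deterministically.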
Language models have existed for decades, though traditional language models had many deficiencies. In 2017, however, Google published a paper, Attention Is All You Need(1), that introduced Transformer networks(2), kicking off a massive revolution in natural language processing. Overnight, machines could suddenly do tasks like translating between languages nearly as well as, and sometimes better than, humans. Transformers introduce a mechanism called 'attention' that allows the model to analyze the entire input all at once, in parallel, choosing which parts are most important and influential. Every output token is influenced by every input token, similar to how a human reads and interprets a full sentence. When most companies talk about modern uses of AI, large language models are typically what they are referring to.
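For readers who want a feel for what 'attention' computes, the short NumPy sketch below (an illustration of scaled dot-product self-attention, not production model code) shows each output row being built as a probability-weighted mix of every input row:

import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over token vectors X (one row per token)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)               # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax: each row of weights sums to 1
    return weights @ X                          # every output is influenced by every input

tokens = np.random.rand(3, 4)        # three toy token embeddings, 4 dimensions each
print(self_attention(tokens).shape)  # (3, 4): one contextualized vector per input token

In a full Transformer, the queries, keys, and values are separate learned projections of the input, and many such attention 'heads' run in parallel; the simplification above keeps only the core weighted-mixing step.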
Applying AI to your business.
In business, there are three broad layers at which AI can be employed. All three layers should be examined individually for opportunities and risks.
Product AI
This is the implementation of AI as part of a company's products and services. LLMs in particular command a new place in products for translating natural language requests into actions, creating a new paradigm for user experience. Product AI can identify user intent, reduce time spent by users completing tasks, and perform actions on the user's behalf. This is the most visible and impactful type of AI for customers and can be a market differentiator.
Operational AI
This application of AI refers to all of the processes a business needs to run behind the scenes to function efficiently. Operational AI automates many aspects of back office operations that traditionally may require human intervention to complete. The major benefit of operational AI is in operational speed and cost savings.
Internal AI
Internal AI refers to all the new AI tools and services a company may allow employees to use as part of their roles to make them more productive. These can be generative tools that create images to specification, writing tools that draft internal documents, or tools that consume large numbers of documents to produce easy-to-digest summaries. Many existing productivity tools from companies such as Google and Zoom now include AI features that are offered to customers today. Using these tools can carry risks if employees begin using them without properly vetting them first.
Understanding the legal risks.
Every organization has unique compliance needs and challenges. Below are risks we've identified that can be used as a starting point with your internal teams to develop policies and procedures that fit your organization when you're making decisions around the use of AI.
Given the wide use of novel AI technologies, frameworks have been proposed to enumerate and mitigate the associated risks. However, many of these risks are themselves novel, as AI presents evolving and complex vulnerabilities that may not be easily categorized within existing frameworks. As new case law and proposed regulation arise from the use of AI, those decisions will shape the regulatory framework and the future of AI in operational use.
Ability to protect work products
You'll want to consider the copyright implications of using AI to create ideas or products so you can ensure your ability to protect your innovations. The US Copyright Office and the US Patent and Trademark Office have declined to extend copyright or patent protection to outputs created by AI tools, on the basis that only works created by human authors or inventors are eligible for protection. The recent decision in Thaler v. Perlmutter upheld this requirement of human authorship for copyright.(3)
Most recently, the US Copyright Office denied protections for another AI-generated image as it was not the product of human authorship.(4) The Office asked the artist to disclaim the parts of the image that AI generated in order to receive copyright protection because it contained more than a minimal amount of AI-created material.
Generally, without these copyright protections, you may not be able to prevent others from copying or reusing the output generated from your input, or to stop an AI platform from using or disclosing identical outputs.
If you are unable to demonstrate and separate the portions that were created by you versus those generated by AI, and thereby persuade the USPTO and the US Copyright Office of human authorship, you may not be able to protect your proprietary materials, because there is no copyright protection for works created by non-humans, including machines.
Intellectual property risks
It's also important to know that outputs created by AI may infringe upon third-party intellectual property rights, both due to the nature of the inputs and the nature of your prompts. Several lawsuits have already been filed against AI platforms, alleging that the use of inputs owned by third parties to train models and generate outputs without permission infringes upon their intellectual property rights and violates other rights and laws.
Several authors are suing over claims of copyright infringement. These suits allege, among other things, that different LLMs were trained on illegally acquired datasets containing their works, which the authors say were obtained from 'shadow library' websites.
Other companies have filed lawsuits regarding the training of models on images that are protected by copyright to create AI image-generating tools.
If you use AI to generate an output that refers to, or is inspired by, identifiable third-party materials (e.g., requesting an output displaying a character designed by an artist, or that mimics another person), the output may infringe upon that third party's intellectual property rights or privacy.
Deepening these risks, AI platform terms and conditions typically provide no protection against lawsuits based on output and, in fact, often place liability entirely on the user. This means you may face liability if you generate and use problematic outputs, with no right of indemnification or other recourse to avoid that liability.
Coding input and output risks
If you use an AI platform to develop code, it is critical to understand that these platforms are typically trained on publicly available source code as their inputs, the majority of which is subject to open source licenses(4). Some of these licenses are 'copyleft,' meaning that if covered code is incorporated into your software, you may be required to make your proprietary code available for free under the same copyleft license. Even 'permissive' licenses commonly impose attribution or other requirements on distribution. Using AI for code development presents those same risks, but without identifying which license applies to each piece of open source code.
Coding outputs may also contain bugs, vulnerabilities, or security flaws. AI platforms typically disclaim any responsibility for output, which means these platforms may provide source code that is not reliable and may subject the company to legal and security risks.
Output defects, distortion, and bias
AI is still a developing technology, and it is far from perfect. Outputs are often accurate and suitable for their use case; other times, they may contain errors, be misleading or deceptive, or be based on training data that is inaccurate to begin with.
AI might also 'hallucinate' (fabricate something but present it as fact). Remember 'The ChatGPT Lawyer'?(5) AI can also generate output that is discriminatory, unethical, or offensive to local customs or community norms. Output can also be biased, since it reflects whatever biases exist in the underlying training inputs.
In 2017, Amazon stopped using its AI recruitment system(6), which was intended to evaluate applicants for various open roles. The system learned how to judge if someone was suitable for a role by looking at resumes from previous candidates. Unfortunately, it became biased against women in the process because women were previously underrepresented in technical roles.
While Twitter has made recent headlines due to Elon Musk's acquisition and rebrand to 'X,' Microsoft's attempt to showcase a chatbot on the platform was even more controversial. In 2016, Microsoft launched its AI chatbot 'Tay,'(7) which was intended to learn from its casual conversations with other users. Microsoft noted how 'relevant public data' would be 'modeled, cleaned, and filtered.' However, within just 24 hours, the chatbot was sharing tweets that were racist, transphobic, and antisemitic. It learned discriminatory behavior from its interactions with users, many of whom were feeding it incendiary messages.
Such defects and biases make it extremely important to note that when using someone else's AI model, you likely will have no insight into what kind of data was used to train the model. The risk is heightened where output is used in circumstances in which accuracy or fairness is essential, such as human resource decisions (hiring and performance management) or when providing services or products to customers (access to credit or insurance, the provision of healthcare, etc.).
Confidentiality and privacy concerns
When you submit a prompt on an AI platform, unless you negotiate a contract that says otherwise, the platform may retain rights to reuse that information, publish the output, and more generally use that data to train its models. Although each platform has different terms and conditions, AI platforms typically do not commit to maintaining the confidentiality or security of the prompts or outputs.
You should also perform due diligence on AI service providers. Even if they have the best intentions around confidentiality and data privacy, architectural and business limitations of AI platforms may mean that the platforms are actually unable to deliver on them. It's important to evaluate the third-party risk as you would for any other vendor getting access to your data and to incorporate organizational, contractual, and physical safeguards against these types of incidents.
As a best practice, when providing data for prompts, you should assume that all information you provide will be public; think of these as disclosures to a third party. With that in mind, how much would you actually share?
It's especially important to be careful with:
Your trade secret information (you risk losing the ability to use your trade secret protection)
Information regarding inventions that you intend to patent in the future (you risk putting such material in the public domain)
Your customersâ or partnersâ confidential information (you may be breaching contractual obligations)
Your sensitive, confidential, or legally privileged information (you risk exposing this and losing confidentiality)
And/or any userâs personal information (you could be violating privacy or publicity laws)
The big takeaway here is that the best way to avoid disclosures and data leaks is to avoid inputting personal information and anything you would need to keep confidential or legally privileged into prompts.
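As a minimal illustration of that takeaway (a sketch only, not a substitute for a real data-loss-prevention program; the patterns below catch just a few obvious identifiers), some teams screen prompts for personal information before anything leaves their systems:

import re

# Illustrative patterns only; a production filter would cover many more
# identifier types and typically rely on a dedicated DLP service.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace obvious personal identifiers before a prompt is sent to a third-party platform."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(scrub_prompt("Summarize the dispute raised by jane.doe@example.com, card 4111 1111 1111 1111."))

Screening like this reduces accidental disclosures, but it does not make it safe to share trade secrets or privileged material; the guidance above still applies.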
Proposed and existing regulations
It's also important to stay on top of emerging AI regulations as AI becomes more widespread and new use cases are tested.
EU AI Act
In April 2021, the European Commission proposed the first EU regulatory framework for AI(8). Under the proposal, AI systems that can be used in different applications are analyzed and classified according to the risk they pose to users. The different risk levels will determine the level of regulatory requirements on the different platforms/systems. If approved, these will be the world's first rules for AI.
Unacceptable risk: This includes systems considered a threat to people, which would be banned. Examples include:
Cognitive behavioral manipulation of people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children.
Social scoring, such as classifying people based on behavior, socio-economic status, or personal characteristics.
Real-time and remote biometric identification systems, such as facial recognition.
High risk: This includes systems that negatively affect safety or fundamental human rights. This would be divided into two categories:
1) AI systems that are used in products falling under the EU's product safety legislation.(9) This includes toys, aviation, cars, medical devices, and elevators.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
Biometric identification and categorization of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum, and border control management
Assistance in legal interpretation and application of the law
Generative AI, like ChatGPT, would have to comply with transparency requirements:
Disclosing that the content was generated by AI
Designing the model to prevent it from generating illegal content
Publishing summaries of copyrighted data used for training
Limited risk: This includes systems needing to comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using them. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio, or video content, such as deepfakes.
US State Law Tracker(10)
Many states have incorporated the usage of AI or the restriction of automated decision making for instances involving personal data into their data privacy regulations.
Colorado's legislation (SB21-169)
The Colorado Division of Insurance ('CDI') has finalized a regulation requiring insurers to implement AI governance and risk management measures that are reasonably designed to prevent unfair discrimination in the use of AI models that rely on external consumer data and information sources. The regulation restricts insurers' use of 'external consumer data,' prohibits data, algorithms, or predictive models from unfairly discriminating, and requires insurers to test their systems and demonstrate that they are not biased.
Mitigating the risks of AI.
Enforceable policies on the use of AI
Companies using or planning to use AI internally should have an AI policy that takes into account the potential use cases for AI, the risks of using AI, and the company's risk tolerance given the potential benefits to its business. This policy should include clear guidelines for employees and contractors that outline:
Use cases that are permitted or prohibited
What company information can be used or not
Which platforms are permitted or prohibited
What steps need to be taken when using AI to mitigate risk
Oftentimes, companies can leverage existing policies and assessments to incorporate AI reviews and guidelines.
These policies are often best developed by a cross-functional team, including legal, compliance, management, IT, engineering, and other internal stakeholders, with internal checks and balances to enforce them. Companies should also regularly review new policies and frameworks given the fast-changing legal environment pertaining to AI.
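To make guidelines like these concrete, here is a purely illustrative sketch in Python (the platform names, data categories, and rules are hypothetical placeholders, not recommendations) of how an acceptable-use policy could be captured in machine-readable form so that internal tooling can reference the same rules employees read:

# Hypothetical example only: every value below is a placeholder that an
# organization would replace with its own policy decisions.
AI_USE_POLICY = {
    "approved_platforms": {"vendor-a-enterprise", "vendor-b-enterprise"},
    "prohibited_data": {"customer personal data", "trade secrets", "legally privileged material"},
    "prohibited_use_cases": {"automated hiring decisions", "automated credit decisions"},
    "required_steps": ["human review of all outputs", "record where outputs are used"],
}

def request_allowed(platform: str, data_categories: set, use_case: str) -> bool:
    """Coarse pre-check that a proxy or internal tool could run before a prompt leaves the company."""
    return (
        platform in AI_USE_POLICY["approved_platforms"]
        and not data_categories & AI_USE_POLICY["prohibited_data"]
        and use_case not in AI_USE_POLICY["prohibited_use_cases"]
    )

print(request_allowed("vendor-a-enterprise", {"public marketing copy"}, "drafting internal docs"))  # True
print(request_allowed("vendor-a-enterprise", {"trade secrets"}, "drafting internal docs"))          # False

Codifying the policy this way also makes it easier to keep written guidelines and technical enforcement in sync as the policy evolves.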
Human oversight for quality control
Given the risks identified above, it's important to apply human oversight and careful review to any AI outputs before they are used. Output should be subject to the standard quality controls of the business, including checks for accuracy, consistency, and security against your company's standards and policies. AI should be used in a way that always includes a 'human audit' of the outputs, to prevent fully automated decision making and other potential risks.
Due diligence and risk assessments on AI platforms
As with other third-party vendors, AI platforms should go through a risk assessment and diligence review of their systems, terms, and public-facing statements regarding data, security, and associated practices before you decide to use them. Companies should ensure the platforms meet their standards, expectations, and any other obligations that may be passed down by regulators or contractual agreements. Investigate and consider whether the platform:
has adequate security measures in place to protect your prompts and outputs
reserves rights to use your outputs for training models or otherwise
has an enterprise agreement with better contractual terms available
discloses which data sets (and their origins) were used to train the underlying models you will be using
allows you to use your own training data in a private environment
A lack of this information may also be informative to your decision-making process.
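Purely as an illustration (the field names below are assumptions, not a standard), recording the answers to these diligence questions in a consistent structure makes it easier to compare vendors and to spot what is still unanswered:

from dataclasses import dataclass, field, fields
from typing import Optional, List

@dataclass
class AIVendorAssessment:
    """Illustrative diligence record; None means the question is still unanswered."""
    vendor: str
    protects_prompts_and_outputs: Optional[bool] = None      # adequate security measures?
    reserves_rights_to_train_on_data: Optional[bool] = None  # can our data be used to train their models?
    enterprise_terms_available: Optional[bool] = None
    training_data_origins_disclosed: Optional[bool] = None
    private_training_environment: Optional[bool] = None
    notes: List[str] = field(default_factory=list)

    def open_questions(self) -> List[str]:
        """Unanswered questions are themselves useful input to the decision."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

review = AIVendorAssessment(vendor="example-ai-vendor", reserves_rights_to_train_on_data=False)
print(review.open_questions())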
Transparency and diligence
Regulatory bodies such as the FTC, alongside state privacy laws, are routinely releasing guidance warning businesses about the harms of AI-powered tools, including the obligation to advertise accurately and transparently to consumers. The FTC has already brought enforcement actions(11) against companies for using AI systems in a manner that was inconsistent with their privacy obligations and, as part of the settlements, required those companies to destroy the underlying algorithms they used to develop their AI systems.
Companies should record how they are using AI, including what outputs are created and where they are used within the business and/or product. Transparent AI policies and procedures can help build consumer trust. If you have clear guidelines regarding the use of AI and maintain records of AI use, you can demonstrate that you are aware of the risks and are taking active steps to mitigate them, while still taking advantage of the technological opportunities that AI provides.
How Brex mitigates AI risk.
We believe that AI can be a true game changer for a business, but as with everything else, it requires a thoughtful, measured approach. When leveraging AI in our products, Brex aims to employ AI where it will have the greatest benefit to our customers: increasing their efficiency, productivity, and accuracy. We have established three principles to guide our AI decisions in our customer-facing products:
Transparency through trust: Brex is transparent in how AI works to automate workflows, guide reviews, and provide insights.
Keep users in control: Brex AI acts as an extension of users' own decision making, providing the automations and suggestions they want.
Privacy is paramount: Customer data is protected by strict security protocols and is not used to improve third-party AI models.
Our latest AI offering is Brex Assistant, a first-of-its-kind, proactive assistant that simplifies expense management for employees and improves their productivity. Brex Assistant is part of our Brex AI suite, which uses AI to deliver powerful automations, insights, and suggestions to employees, managers, and finance teams.
Brex is leading the conversation on AI in finance today and has been leveraging AI since inception across underwriting, fraud, receipt matching, merchant categorization, and other product areas. As such, we have established robust guardrails and protocols for ensuring the safety, privacy, and security of our customers.
How Brex reduces AI risk.
AI systems are designed to operate with a varying amount of autonomy. While the use of AI can provide monumental benefits in the way Brex and others continue to scale and deliver value to customers, the risks posed by AI systems are in many ways unique and may require a more nuanced risk review. Our goal is to enable Brex's use of these systems in a manner that is both innovative and safe for our customers.
Guardrails we have implemented to enable the business to innovate safely include:
Technical security reviews
GRC Risk Assessments (new AI technologies and existing vendors with new AI features)
Legal and privacy contractual protections
AI at Work Usage Guidelines
AI Code of Ethics
AI Acceptable Use Policy
In addition to a technical security review via our Secure Product Lifecycle program and legal review for appropriate contractual protections, our GRC team performs risk assessments when evaluating new AI technologies and asks these questions:
1. Data source: Does the AI rely on querying datasets/sources of information and if so, what are those sources? How do you protect against data poisoning?
a. Why do we ask this?
It's important to understand exactly what data sources the technology is indexing or leveraging to provide answers and outputs. AI is not magic. It is based on data aggregation and algorithms, and changes to the data source can drastically affect the reliability and accuracy of the outputs. Bias can also be introduced, so it is important to understand how change management and the prevention of data poisoning are handled for in-scope data sources and systems, to ensure the reliability and integrity of the data being used.
b. What risks are we trying to mitigate?
Legal risks (data privacy and confidentiality)
2. Data lifecycle: What happens when we input data into the AI engine? Where is the data stored? Who has access?
a. Why do we ask this?
Essentially, we want to know the lifecycle of the data as it moves through the AI system â are queries logged/saved? Is output logged/saved? If so, who at the AI vendor has access to the stored information, and what are their data retention policies? Is data access in accordance with the principle of least privilege? Is need-to-know enforced? This helps us understand if the risk of unauthorized access or data exposure is managed.
b. What risks are we trying to mitigate?
Legal risks (data privacy and confidentiality)
3. Data segregation: Is our data segregated from other companies' data?
a. Why do we ask this?
We ask this question to find out if each customer/partner of this AI technology has their own instance of the data set being used against the algorithms. This is preferred to avoid inadvertent mixing of data among customers. It also ensures that our particular company's policies can be applied, such as opting out of training the model and data retention requirements.
b. What risks are we trying to mitigate?
Legal risks (work product and output defects)
4. Model training: How often is the AI re-trained, and where does the AI reside? How does the retraining occur? Can we opt out of our data being used to train the model?
a. Why do we ask this?
As mentioned in the legal risk section above, AI technologies may contain errors, be misleading or deceptive, or be trained on data that is inaccurate to begin with. It is imperative to take steps to understand how and when the data is trained and how companies can opt out of having their data train the model. Understanding model training can help reduce the risk of bias and distortion in AI outputs, as well as reduce the risk of inadvertently incorporating intellectual property into the model for future querying.
b. What risks are we trying to mitigate?
Legal risks (intellectual property and output defects)
Looking to use AI?
At Brex, we have a cross-functional team that is dedicated to empowering our employees to innovate with the use of AI, but in a way that's focused on stewardship and safety. The use of AI can provide significant benefits, and our team has the following recommendations for other companies looking to leverage AI:
1. Establish a baseline understanding of what AI, LLM, etc., means in the context of your business, your products/services, and your targeted customer base. Understanding how your company plans to use AI, LLM, etc., and the benefits it provides will be key in understanding the business case and where investments in AI could be made, which will inform potential risk areas.
2. Develop governing policies and procedures. AI usage guideline docs and acceptable use policies can go a long way in keeping your data safe. These guidelines should focus on mitigating risk and should provide employees with general best practices that they need to follow when using AI.
3. Perform internal assessments across the areas below, document the results, and use them to improve your methodology over time.
Security: While there are significant privacy concerns with the use of AI, don't forget to evaluate the more granular technical controls that support privacy. Ask for security-related documentation, data flow diagrams, architecture-related documents, etc. Never assume that controls identified in a SOC 2 Type II report, for example, are replicated by default in new AI-related products and services.
Legal/Privacy: Legal can reduce the risks highlighted above by ensuring contractual protections, opt-out configurations, and data retention requirements are in place. Privacy best practices such as Fair Information Practice Principles(12) apply to AI products/services and should be taken into consideration.
Finance: The use of AI technology can come with a cost, especially at the enterprise level. Your finance department should evaluate the cost associated with the technology to ensure there is budget and alignment with business objectives.
IT: Your IT team should evaluate all new technology for compatibility with the organization's environment to ensure standards such as authentication are met and duplicative tool functionality is avoided.
Embracing the potential of AI
As you evaluate the potential risks and rewards of leveraging AI to improve your internal and operational processes and customer-facing innovations, we hope that our proven best practices, guidelines, and insights shared above will help you make faster, more confident decisions about which AI applications are right for your company. There is no doubt that AI technology should be embraced. However, we recommend starting by ensuring that you're making appropriate investments in regulatory and compliance resources and evaluation, so that you can protect your business and your customers above all else while harnessing the transformative power of AI.
Sources
1 Attention Is All You Need, Google, 2017
2 Transformer Networks, Wikipedia
3 Thaler v. Perlmutter, Authors Alliance, 2023
4 Open Source Licenses: Types and Comparison, Snyk.io
5 ChatGPT Lawyer, NY Times, 2023
6 Amazon scraps secret AI recruiting tool that showed bias against women, Reuters, 2018
7 Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day, The Verge, 2016
8 EU AI Act: first regulation on artificial intelligence, Europarl, 2023
9 EU product safety legislation, 2001
10 The State of State AI Laws: 2023, Epic.org
11 FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology, FTC, 2021
12 Fair Information Practice Principles (FIPPs), FPC
Unlock the power of automation.
The time is now to increase efficiency, compliance, and control with AI-powered spend management. Let's talk.