Foundations
Artificial Intelligence (AI)
A broad term for computer systems that can perform tasks that would typically require human intelligence — things like understanding language, recognising images, making decisions, and generating content. In business, AI almost always refers to software tools rather than robots or science fiction concepts.
Machine Learning
A type of AI where the system learns patterns from data rather than following hard-coded rules. Instead of a programmer writing every instruction, the system is trained on large amounts of data and figures out the patterns itself. Most modern AI tools are built on machine learning.
In practice: A spam filter learns what spam looks like by studying millions of emails, rather than someone writing a rule for every possible spam message.
Deep Learning
A more advanced form of machine learning that uses layers of processing (called neural networks) to handle complex tasks. Deep learning is what made recent AI breakthroughs possible — it's the technology behind tools that can understand language, generate images, and recognise speech.
Neural Network
A system loosely inspired by the human brain, made up of layers of connected nodes that process information. Data goes in one side, passes through multiple layers where it gets analysed and transformed, and a result comes out the other side. You don't need to understand how they work internally — just know that they're the engine behind most modern AI.
Algorithm
A set of rules or steps that a computer follows to complete a task or solve a problem. In AI, algorithms are the mathematical methods used to train models and make predictions. You don't need to understand the maths — just know that algorithms are the instructions that make AI work.
Natural Language Processing (NLP)
The area of AI focused on understanding, interpreting, and generating human language. It's what allows AI tools to read your emails, summarise documents, answer questions, and write content. Every AI chatbot and writing tool relies on NLP.
Training Data
The information used to teach an AI model. Just as a person learns from experience, an AI model learns from data — text, images, numbers, or whatever it's being trained on. The quality and breadth of training data has a major impact on how good the model is. Poor or biased training data leads to poor or biased results.
Models & Tools
Large Language Model (LLM)
An AI model trained on vast amounts of text that can understand and generate human language. Tools like ChatGPT, Claude, and Gemini are all powered by LLMs. "Large" refers to the enormous amount of data they're trained on and the billions of parameters (internal settings) they use.
Generative AI
AI that creates new content — text, images, code, audio, video — rather than just analysing or categorising existing content. When someone uses an AI tool to write an email, generate a product image, or draft a report, that's generative AI at work. It's the category that most current business AI tools fall into.
Foundation Model
A large, general-purpose AI model that serves as the base for many different applications. Companies build foundation models (like GPT, Claude, or Gemini) and then those models get used in hundreds of different tools and products. Think of it as the engine — different companies build different cars around the same engine.
Open Source
AI models or software whose code is made publicly available for anyone to use or build on. In practice, many models called "open source" are more accurately "open-weight" — the model itself is available to download and run, but the licence may have restrictions. These models can give businesses more control over their data and reduce subscription costs, though they require more technical expertise to set up.
Multimodal
An AI model that can work with multiple types of input and output — not just text, but also images, audio, video, and files. Most major AI tools are now multimodal, meaning you can upload a photo and ask questions about it, or give it a document and have it summarised. This makes them far more versatile for business use.
AI Agent
An AI system that can perform multi-step tasks with minimal human supervision. Rather than responding to a single question, an agent can take a broader instruction — like "research these suppliers, compare pricing, and draft a shortlist" — and work through it step by step, using tools and making decisions along the way. Agents are already available in various forms and improving rapidly.
Agentic AI
A style of AI where the system takes autonomous, multi-step actions rather than just answering questions. In an agentic workflow, AI might research a topic, draft a document, check it against your brand guidelines, and send it for review — all from a single instruction. The term describes the shift from AI as a question-and-answer tool to AI as a capable assistant that can carry out tasks independently.
Chatbot
A software application that simulates conversation with users. Older chatbots followed simple scripts and could only handle pre-defined questions. Modern AI-powered chatbots can understand natural language, handle complex queries, and generate helpful responses. Widely used in customer service, internal support, and on websites.
Copilot
A term increasingly used for AI tools that work alongside you rather than replacing you — assisting with tasks while you remain in control. The concept is that AI handles the routine or time-consuming parts of a task while you make the decisions and provide the judgement. Several major products use this term in their branding.
Reasoning Model
An AI model designed to "think longer" before answering — working through a problem step by step rather than responding instantly. These models tend to perform better on complex tasks like maths, logic, coding, and multi-part analysis. They take longer to respond and cost more per query, but the results can be noticeably better for tasks that need careful reasoning.
Diffusion Model
The technology behind most AI image generation tools. It works by learning to gradually remove noise from random static until a clear image forms — guided by the text description you provide. You don't need to understand the mechanics, just know that this is how AI creates images from text prompts.
Working with AI
Prompt
The input you give to an AI tool — a question, instruction, or description of what you want. The quality of your prompt directly affects the quality of the output: vague prompts give vague results, while specific, well-structured prompts give much better ones. Learning to write good prompts is one of the most practical skills for getting value from AI.
Prompt Engineering
The practice of crafting and refining prompts to get better results from AI tools. This includes techniques like giving the AI a role ("act as a marketing manager"), providing context, specifying the format you want, and including examples of good output. It's not as technical as it sounds — it's closer to learning how to brief a colleague effectively.
System Prompt
Background instructions given to an AI tool that shape how it behaves across an entire conversation or application. Unlike a regular prompt (a single request), a system prompt sets the tone, rules, and context for everything the AI does. Businesses use them to customise AI for specific tasks — for example, telling a customer service bot what it should and shouldn't discuss.
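Most chat-style AI services represent this as a list of role-tagged messages, with the system prompt placed first. A minimal Python sketch (the role names follow a common convention rather than any one vendor's API, and the company name is made up):

```python
# A conversation as most chat AI services represent it: the "system" message
# sets persistent rules; "user" messages follow it.
def build_conversation(system_prompt, user_message):
    """Return a message list with the system prompt applied first."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_conversation(
    "You are a customer service assistant for Acme Ltd. "
    "Only discuss Acme products; never quote prices.",
    "Can you help me reset my password?",
)
```

Every later message in the conversation is influenced by that first system entry, which is why it is the natural place for business rules.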
Context Window
The amount of text an AI model can consider at one time — essentially its working memory. Everything in a conversation, including your prompts, the AI's responses, and any uploaded documents, takes up space in the context window. Once it's full, the model starts losing track of earlier information. Larger context windows mean the AI can handle longer documents and more complex conversations.
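As an illustration, a tool might simply drop the oldest messages once a conversation no longer fits the budget. A rough Python sketch, using the approximate "1 token per three-quarters of a word" rule (real tokenisers count differently, and real tools use smarter strategies such as summarising older messages):

```python
def estimate_tokens(text):
    """Very rough token estimate: one token per 0.75 English words."""
    return max(1, round(len(text.split()) / 0.75))

def trim_to_budget(messages, budget):
    """Keep the newest messages that fit within the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest-first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break                        # oldest messages fall out of memory
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore original order
```

With a budget of 9 estimated tokens, a three-message history of three-word messages keeps only the two most recent; this is the "losing track of earlier information" behaviour in miniature.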
Token
The unit AI models use to process text. In English, a token is roughly three-quarters of a word — so 100 words is about 130 tokens. Tokens matter because they determine how much you can send to and receive from an AI tool, and because many AI services charge based on token usage.
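The three-quarters rule can be turned into a quick estimator for budgeting purposes. Treat the result as a rough guide only, since real tokenisers split text differently:

```python
# Rough rule of thumb from the definition above: one token is about
# three-quarters of an English word. Real tokenisers vary, so treat
# the result as an estimate, not an exact count.
def rough_token_count(text):
    words = len(text.split())
    return round(words / 0.75)

hundred_words = " ".join(["word"] * 100)
print(rough_token_count(hundred_words))  # 100 words comes out at about 133
```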
Temperature
A setting that controls how creative or predictable an AI model's responses are. Low temperature produces more focused, consistent output — good for factual tasks. High temperature produces more varied, creative output — better for brainstorming or creative writing. Some AI tools let you adjust this directly; others manage it behind the scenes.
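Under the hood, temperature rescales the model's internal scores before they become probabilities: dividing by a small number sharpens the distribution, dividing by a large number flattens it. A simplified Python sketch of the mechanism (real models apply this over tens of thousands of candidate tokens, not three):

```python
import math

def softmax_with_temperature(scores, temperature):
    """Convert raw scores into probabilities, scaled by temperature."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

low = softmax_with_temperature([2.0, 1.0, 0.5], 0.2)   # focused: top choice dominates
high = softmax_with_temperature([2.0, 1.0, 0.5], 2.0)  # varied: choices more even
```

At temperature 0.2 the top option gets almost all the probability, so the model nearly always picks it; at 2.0 the options are much closer, so output varies between runs.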
Few-Shot Prompting
A technique where you include a few examples of what you want in your prompt so the AI understands the pattern. Instead of just saying "write product descriptions," you'd provide two or three example descriptions and then ask it to write more in the same style. It's one of the simplest and most effective ways to improve AI output quality.
In practice: "Here are two customer email responses we've sent before: [examples]. Now write a response to this new enquiry in the same tone."
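Mechanically, this is just string assembly: the examples are stitched in front of the new request. A small Python sketch (the "Input:"/"Output:" labels and the sample products are illustrative):

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a prompt that shows worked examples before the new request."""
    parts = [instruction, ""]
    for example_in, example_out in examples:
        parts.append(f"Input: {example_in}")
        parts.append(f"Output: {example_out}")
        parts.append("")
    parts.append(f"Input: {new_input}")
    parts.append("Output:")                 # the AI continues from here
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Write product descriptions in our house style.",
    [("Steel water bottle", "Keeps drinks cold for 24 hours. Built to last."),
     ("Canvas tote bag", "Roomy, sturdy, and machine washable.")],
    "Bamboo cutting board",
)
```

Ending the prompt at "Output:" invites the model to complete the pattern the examples establish.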
Chain-of-Thought Prompting
A prompting technique where you ask the AI to work through a problem step by step rather than jumping straight to an answer. This tends to produce more accurate and well-reasoned results, especially for complex questions. Simply adding "think through this step by step" to a prompt can noticeably improve the quality of the output.
Infrastructure
API (Application Programming Interface)
A way for software systems to talk to each other. When a business tool connects to an AI model to use its capabilities — like a CRM that drafts emails using AI, or an accounting tool that categorises transactions — it's using an API. You don't need to build APIs yourself, but understanding what they are helps you evaluate tools that say they "integrate via API."
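Concretely, an API call is usually a structured JSON message sent over the web. The field names, model name, and URL below are illustrative only, not any specific provider's schema:

```python
import json

# What "integrates via API" typically means under the hood: the business tool
# builds a JSON request, sends it to the AI provider over HTTPS, and reads a
# JSON reply back. Everything here is an illustrative shape, not a real API.
request_body = {
    "model": "example-model",          # which AI model to use (hypothetical name)
    "messages": [
        {"role": "user", "content": "Draft a follow-up email to this client."}
    ],
    "max_tokens": 200,                 # cap on the length of the reply
}
payload = json.dumps(request_body)

# An HTTP client would POST `payload` to an endpoint such as
# https://api.example.com/v1/chat (hypothetical URL) and parse the response.
parsed = json.loads(payload)
```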
Cloud Computing
Running software and storing data on remote servers accessed over the internet, rather than on your own computer. Almost all AI tools run in the cloud — when you use ChatGPT or Claude, the processing happens on powerful computers elsewhere and the results are sent back to you. This is why AI tools need an internet connection.
Compute
The processing power needed to train and run AI models. AI requires significant computing resources, which is why it's expensive to build and operate. When you hear that "AI costs are falling," it usually means that the compute needed to achieve the same results is becoming cheaper. For most businesses, compute costs are bundled into the subscription price of the tools you use.
GPU (Graphics Processing Unit)
A type of computer chip originally designed for rendering graphics, now widely used for AI because it can perform many calculations at once. GPUs are the primary hardware that AI models run on. You'll see them mentioned frequently in AI news because demand from AI companies has driven intense competition for limited supply.
Inference
The process of using a trained AI model to generate a response or make a prediction. Training is when the model learns; inference is when it applies what it's learned. Every time you send a message to ChatGPT or Claude, the model is performing inference. Inference costs are a key factor in AI pricing — faster inference means quicker responses but typically costs more.
Latency
The delay between sending a request to an AI tool and receiving a response. Lower latency means faster responses. Latency matters most when AI is being used in real-time applications like customer-facing chatbots or live call analysis, where delays are noticeable and affect the experience.
On-Premise
Running software on your own hardware, in your own physical location, rather than in the cloud. Some businesses choose to run AI models on-premise for data privacy reasons — your data never leaves your building. This is more complex and expensive to set up but gives you full control over your data. Most small and medium businesses use cloud-based AI tools instead.
Edge AI
Running AI directly on a device (phone, laptop, sensor, camera) rather than sending data to the cloud for processing. This means faster responses and better privacy, since data doesn't leave the device. Examples include voice assistants that process commands locally, or security cameras with built-in object detection.
Business & Implementation
Fine-Tuning
Taking a pre-trained AI model and training it further on a specific dataset to make it better at a particular task. For example, a general language model might be fine-tuned on your company's customer service conversations so it can respond in your brand's tone. Fine-tuning is more technical than standard AI usage and requires good-quality data, but it can make a generic tool feel purpose-built.
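Fine-tuning data is typically prepared as a file of input/output pairs, often one JSON object per line (a format called JSONL). The exact schema varies by provider, so the field names and sample text below are illustrative only:

```python
import json

# Each training example pairs an input with the output you want the model to
# learn to produce. Field names and content here are illustrative, not any
# specific provider's required schema.
examples = [
    {"prompt": "Customer asks about delivery times.",
     "completion": "Thanks for getting in touch! Standard delivery is 3-5 working days."},
    {"prompt": "Customer asks about returns.",
     "completion": "No problem at all - you have 30 days to return any item."},
]

# JSONL: one JSON object per line, which is what many training pipelines expect.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

A few hundred consistent examples like this are usually the starting point; the quality and consistency of the pairs matter more than sheer volume.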
Retrieval-Augmented Generation (RAG)
A method that gives an AI model access to a specific set of documents or data so it can reference them when generating responses. Instead of relying only on what it learned during training, the model retrieves relevant information from your documents first, then uses that to answer. This is how businesses create AI tools that can answer questions about their own products, policies, or internal knowledge.
In practice: A company connects their product documentation to an AI chatbot using RAG, so the chatbot can give accurate, specific answers about their products rather than generic responses.
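A toy version of that flow in Python: score each document by word overlap with the question, then place the best match in the prompt. Real systems use embeddings and vector search rather than word overlap, but the retrieve-then-answer shape is the same (the sample documents are made up):

```python
import re

def words(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents):
    """Pick the document sharing the most words with the question."""
    return max(documents, key=lambda d: len(words(question) & words(d)))

def rag_prompt(question, documents):
    """Retrieve first, then ground the answer in what was retrieved."""
    context = retrieve(question, documents)
    return f"Using only this context:\n{context}\n\nAnswer the question: {question}"

docs = [
    "Returns policy: items can be returned within 30 days of purchase.",
    "Delivery: standard shipping takes 3-5 working days.",
]
prompt = rag_prompt("How long does delivery take?", docs)
```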
Model Context Protocol (MCP)
A standard way to connect AI tools to your business software — calendars, CRMs, databases, file systems, and more. Think of it like USB for AI: a common plug that lets different tools and data sources work together without custom engineering for each one. MCP is supported by a growing number of AI tools and platforms, making it easier to build AI workflows that pull from your real business data.
Embeddings
A way AI converts text (or images, or other data) into numbers so it can measure how similar things are. This is the technology behind AI search, recommendations, and RAG — when you ask an AI chatbot a question about your documents, embeddings are what help it find the most relevant passages. You don't need to understand the maths, but knowing the concept helps explain how AI "finds" relevant information.
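Once text has been turned into a list of numbers, "similar meaning" becomes a simple calculation: vectors pointing in nearly the same direction score close to 1. A Python sketch using made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions, produced by a model):

```python
import math

def cosine_similarity(a, b):
    """Score from -1 to 1: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

invoice = [0.9, 0.1, 0.2]   # pretend embedding of "invoice"
bill = [0.85, 0.15, 0.25]   # pretend embedding of "bill" (similar meaning)
holiday = [0.1, 0.9, 0.3]   # pretend embedding of "holiday" (unrelated)
```

Because "invoice" and "bill" mean nearly the same thing, their vectors score much higher together than "invoice" and "holiday" do — which is exactly how AI search finds relevant passages.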
Grounding
Connecting an AI model's responses to specific, verifiable sources of information — such as your documents, a database, or live web search. Grounding reduces hallucinations by giving the AI real data to reference rather than relying purely on what it learned during training. Many AI products now offer grounding as a built-in feature.
AI Literacy
The ability to understand, use, and evaluate AI tools effectively. For businesses, AI literacy means your team knows what AI can and can't do, how to write good prompts, when to trust AI output, and when to check it. Building AI literacy across your organisation is one of the most practical steps toward getting real value from AI tools.
AI Automation
Using technology to perform tasks with minimal human involvement. AI automation goes beyond traditional automation (which follows fixed rules) by handling tasks that require judgement, interpretation, or decision-making. In business, AI automation might mean automatically categorising incoming emails, generating invoice summaries, or routing customer queries to the right department based on content.
Workflow
A sequence of tasks or steps that make up a business process. In the context of AI, people talk about "AI-powered workflows" — where one or more steps in a process are handled or assisted by AI. The most effective AI implementations usually focus on improving specific workflows rather than trying to transform everything at once.
Integration
Connecting AI tools to your existing business software so they can work together. For example, connecting an AI tool to your CRM so it can draft follow-up emails, or linking it to your project management tool to generate status updates. Integration is what makes AI useful day-to-day rather than a separate tool you have to switch to.
Shadow AI
When employees use AI tools without official approval or awareness from management. This is common — people find that AI helps them work faster and start using free tools on their own. The risk is that sensitive company data may be entered into AI tools without appropriate safeguards. Having a clear AI usage policy addresses this without discouraging adoption.
Vendor Lock-In
Becoming so dependent on one AI provider's tools, formats, or ecosystem that switching to an alternative becomes difficult or expensive. This can happen when your data, workflows, and processes are all built around a specific provider. Keeping your options open — by using standard formats and avoiding deep dependency on a single provider where practical — reduces this risk.
SaaS (Software as a Service)
Software you access online and pay for via a subscription rather than installing it on your computer and buying it outright. Most AI tools for business are delivered as SaaS — you sign up, pay monthly or annually, and use them through a browser or app. This keeps upfront costs low and means the provider handles updates and maintenance.
Safety & Ethics
Hallucination
When an AI model generates information that is incorrect or entirely made up — but presents it confidently as fact. This happens because AI predicts what sounds right based on patterns, not because it understands truth. Newer models hallucinate less often, but hallucination remains a core characteristic of the technology. Always review AI output before using it in important decisions or sharing it externally.
Bias
When an AI model produces outputs that are systematically unfair or skewed toward certain groups, perspectives, or outcomes. Bias usually comes from the training data — if the data the model learned from over-represents certain demographics or viewpoints, the model's outputs will reflect that. This matters for business decisions like recruitment screening, customer profiling, or content generation where fairness is important.
Guardrails
Rules, filters, and limitations built into AI tools to prevent harmful, inappropriate, or undesired outputs. Guardrails might stop an AI from generating offensive content, sharing confidential information, or making claims it shouldn't. For businesses deploying customer-facing AI, setting appropriate guardrails is an important part of implementation — you need the AI to be helpful without going off-script.
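At their simplest, guardrails are checks applied to a draft reply before it is released. A toy Python sketch (real deployments layer simple filters like this with model-based moderation; the blocked topics are made up):

```python
# A toy guardrail: block draft replies that touch topics the bot must not
# discuss. The topic list here is illustrative only.
BLOCKED_TOPICS = ["internal pricing", "staff salaries"]

def passes_guardrails(reply):
    """Return True if the draft reply avoids all blocked topics."""
    lowered = reply.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)
```

A reply that fails the check would be rewritten, escalated to a human, or replaced with a safe fallback rather than sent as-is.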
AI Alignment
The effort to ensure AI systems behave in ways that are consistent with human values and intentions. In simple terms, it's about making sure AI does what we actually want it to do rather than finding unexpected shortcuts or producing harmful results. This is a major focus area for AI research companies and influences how the tools you use are designed and updated.
Prompt Injection
A security risk where someone crafts an input designed to override an AI tool's instructions or make it behave in unintended ways. For example, a customer might type something into your AI chatbot that tricks it into ignoring its guidelines. This is an important consideration for any business deploying customer-facing AI — proper guardrails and testing help reduce the risk.
Data Privacy
How personal and business data is collected, stored, used, and protected when interacting with AI tools. When you enter information into an AI tool, that data may be stored, used to improve the model, or processed on servers in different countries. Understanding each tool's data policy and choosing business-tier accounts where available are essential steps for any business using AI.