Core Concepts
Beginner

What Is a Prompt?

The input text or instructions given to an LLM to generate a response

Also known as:
Instruction
Input
System Prompt
User Prompt
Prompt

A prompt is the input text or set of instructions given to a Large Language Model to generate a response. Prompts range from simple questions ("What is machine learning?") to elaborate multi-part instructions with system roles, examples, context documents, and output format specifications. The quality and structure of a prompt directly determine the quality of the LLM's output: the same model can produce mediocre or excellent results depending solely on how the prompt is crafted. Prompts are composed of several possible layers: a system prompt that defines the model's behavior and persona, user prompts containing the actual request, and optional context such as retrieved documents or conversation history.
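The layered structure described above can be sketched as a list of chat messages, the shape most chat-style LLM APIs use. This is an illustrative sketch, not tied to any specific provider's API; the role names and content are assumptions for demonstration.

```python
# Illustrative sketch of prompt layers as chat messages (generic shape,
# not a specific provider's API).
prompt = [
    # System prompt: defines the model's behavior and persona.
    {"role": "system", "content": "You are a concise technical assistant."},
    # Optional context layer: retrieved documents or conversation history.
    {"role": "user", "content": "Context:\n<retrieved document text>"},
    # User prompt: the actual request.
    {"role": "user", "content": "What is machine learning?"},
]

# The layers are ordered: behavior first, context next, request last.
roles = [message["role"] for message in prompt]
print(roles)  # ['system', 'user', 'user']
```

The ordering matters: the system prompt is processed first and constrains everything that follows, while the user's concrete request comes last.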

Why it matters

The prompt is the primary interface between humans and LLMs. Unlike traditional software, where developers write code to specify exact behavior, LLMs are steered through natural language instructions, which makes prompt quality the single largest variable in output quality. A well-structured prompt with clear instructions, relevant examples, and an explicit output format can improve task accuracy dramatically, in some cases from roughly 60% to 95%, without changing anything about the underlying model. For businesses, this means prompt optimization is often the highest-ROI activity when building AI applications: improving a prompt requires no retraining or infrastructure changes, yet can yield dramatic quality and cost improvements.

How it works

When you send a prompt to an LLM, the text is first tokenized into a sequence of token IDs. These tokens pass through the model's layers, and the model generates a response by predicting the most likely next token given the full input context. The prompt structure matters because different parts serve different functions: the system prompt (processed first) establishes behavioral constraints and persona; the context section provides reference information; and the user message contains the specific request. The model treats the entire prompt as a continuous token sequence and generates its response as a natural continuation. Longer, more detailed prompts consume more tokens and cost more, creating a natural tension between prompt comprehensiveness and efficiency.
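The pipeline above can be illustrated with a toy tokenizer. Real models use subword tokenizers (byte-pair encoding and similar schemes), so this whitespace-splitting version is a deliberately simplified stand-in; what it does show accurately is that the system and user layers are flattened into one continuous sequence of token IDs before the model sees them.

```python
# Toy illustration: prompt layers are concatenated into one text, then
# tokenized into integer IDs. Real tokenizers split on subwords, not
# whitespace; this is a simplified stand-in.
def tokenize(text, vocab):
    """Map each whitespace-separated word to an integer ID, growing the vocab."""
    ids = []
    for word in text.split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

vocab = {}
system_prompt = "You are a helpful assistant."
user_prompt = "What is machine learning?"

# The model treats the entire prompt as one continuous token sequence.
full_prompt = system_prompt + " " + user_prompt
token_ids = tokenize(full_prompt, vocab)
print(token_ids)  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

Note that a longer prompt produces a longer token sequence, which is exactly why detailed prompts cost more: billing and context limits are measured in tokens.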

Example

A legal tech company building a contract analysis tool discovers the difference prompt quality makes. Their initial prompt — "Analyze this contract and find risks" — produces vague, inconsistent outputs. The optimized version includes a system prompt defining the AI's role as a commercial contract analyst, specifies the exact risk categories to check (liability caps, indemnification clauses, termination triggers, IP assignment), provides two annotated examples of previous analyses, and defines the output format as a structured table with risk level, clause reference, and recommended action. The same model that produced unreliable output with the simple prompt now delivers consistent, lawyer-reviewed-quality analysis — with the only change being a better-structured 400-token prompt.
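The optimized prompt described above might be assembled like this. The wording, category names, and helper function are a hypothetical reconstruction for illustration; the source describes the structure but not the exact text.

```python
# Hypothetical reconstruction of the optimized contract-analysis prompt.
# The risk categories and output columns mirror the example above; the
# exact phrasing is invented for illustration.
SYSTEM_PROMPT = """You are a commercial contract analyst.
Check the contract against these risk categories:
- Liability caps
- Indemnification clauses
- Termination triggers
- IP assignment

Return a table with columns: Risk Level | Clause Reference | Recommended Action."""

def build_prompt(contract_text, annotated_examples):
    """Assemble the full prompt: system role, worked examples, then the contract."""
    examples_block = "\n\n".join(annotated_examples)
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Examples of previous analyses:\n{examples_block}\n\n"
        f"Contract to analyze:\n{contract_text}"
    )
```

Each layer here maps to one element of the improvement: persona and constraints in the system prompt, few-shot examples to anchor the style, and an explicit output format so results stay consistent across contracts.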

Sources

  1. OpenAI — Prompt Engineering Guide (web)
  2. Anthropic — Prompt Engineering Overview (web)
  3. Wikipedia


Related Concepts

Prompt Injection
An attack where malicious input manipulates an LLM into ignoring its instructions
Large Language Model (LLM)
A neural network trained on massive text data to understand and generate human-like language
Prompt Engineering
The systematic practice of designing effective prompts to get optimal results from LLMs
Few-Shot Prompting
Providing a few worked examples in the prompt to guide an LLM's behavior — typically improving accuracy by 20-30% over zero-shot
Zero-Shot Prompting
Asking an LLM to perform a task using only instructions and no examples — the fastest and cheapest prompting approach


© 2026 BVDNET. All rights reserved.