Prompt Engineering and AI Agents

Working with Large Language Models
You've probably heard the acronym GPT, as in ChatGPT. It stands for Generative Pre-trained Transformer, and understanding what that means is the key to understanding how large language models actually work.

In this article, we'll break down each part of the acronym, explore the underlying technology of neural networks, and uncover the magic of the transformer architecture that is powering the AI revolution.

Decoding GPT: Generative, Pre-trained, Transformer

Let's start by breaking down the acronym GPT. Each part gives us a clue about how these powerful models operate.

  • Generative: This simply means the model generates something. In the case of a large language model (LLM), it generates text.
  • Pre-trained: This tells us that all the heavy lifting has been done already. The model has been trained on enormous datasets of books, websites, forum posts, and more. All the weights inside its neural network are already set, so you don't have to train it yourself. You can just use it straight out of the box.
  • Transformer: This is where the real magic happens. A transformer is a specific kind of neural network architecture that has revolutionized the field of AI.

The Basics of Neural Networks

Before we can understand transformers, we need to cover the basics of neural networks. As you might already know, neural networks are made up of nodes in layers. You can think of each node as a little math function that holds numbers, and these numbers are called weights and biases.

[Diagram: the layered structure of a neural network]

When you train a neural network, you feed a load of labeled data into it so it can update all of those weights and learn to spot patterns. Neural networks are excellent at spotting patterns and have been used for things like image recognition for a very long time.

For example, if you were training a neural network to predict the stock market, you could feed in a load of historical stock prices. The network would set its weights based on the patterns it found. Then, when you run a new stock through it, it might produce a result like, "this stock has a 20% chance of going up and an 80% chance of going down."
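To make this concrete, here's a minimal sketch of the arithmetic a single node performs. Every number here, the weights, the bias, and the inputs, is invented for illustration; a real network learns its weights from training data.

    import math

    # Invented weights and bias; a real network learns these during training.
    weights = [0.8, -1.5, 0.3]   # one weight per input feature
    bias = 0.1

    # Three input features, e.g. recent price movements for a stock.
    inputs = [0.05, -0.02, 0.12]

    # Each node computes a weighted sum of its inputs plus a bias.
    score_up = sum(w * x for w, x in zip(weights, inputs)) + bias
    score_down = -score_up  # a second toy node voting the other way

    # Softmax turns the raw scores into probabilities that sum to 1.
    exp_up, exp_down = math.exp(score_up), math.exp(score_down)
    total = exp_up + exp_down
    print(f"up: {exp_up / total:.0%}, down: {exp_down / total:.0%}")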

The Output: A Probability Distribution

The key thing to understand is that the output of a neural network is a probability distribution. A neural network doesn't say something will definitely happen; it says how likely each different option is based on the data it was trained on.

With language models, we're doing something similar: predicting the next word in a sentence. If I give a neural network the phrase "Once upon a time," the model might say there's a 45% chance the next word is "there," a 20% chance it's "in," a 10% chance it's "was," and so on. It then chooses the most likely one, adds it to the sentence, and repeats the process.
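Here's a toy version of that loop. The probability table is entirely made up; a real LLM computes a fresh distribution over its whole vocabulary at every step, based on far more than just the last word.

    # Invented next-word probabilities, keyed by the last word seen so far.
    next_word_probs = {
        "time": {"there": 0.45, "in": 0.20, "was": 0.10},
        "there": {"lived": 0.50, "was": 0.30, "once": 0.05},
        "lived": {"a": 0.60, "an": 0.25, "three": 0.05},
    }

    text = ["Once", "upon", "a", "time"]
    for _ in range(3):
        options = next_word_probs.get(text[-1], {})
        if not options:
            break
        # Greedy decoding: always append the single most likely next word.
        text.append(max(options, key=options.get))

    print(" ".join(text))  # Once upon a time there lived a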

However, if you do this with a basic neural network, it falls apart very quickly. The sentences start well, but they soon drift into nonsense as randomness creeps in and the model forgets what it was talking about.

The Magic of Transformers and the Attention Mechanism

This is where the transformer layers come in. Transformers solve the problem of context drift by using something called "attention."

Attention is a mechanism that lets the model focus on the most important parts of the input. Instead of treating every word the same, it can weigh which of the earlier words it has seen are the most relevant when choosing the next one.

Understanding Context with an Example

If you give a model a sentence like, "The lion approached the bank," that word "bank" can have different meanings. It could be the bank of a river or a bank that holds money.

A transformer can pay attention to the word "lion." That context helps it decide that "bank" most likely refers to a riverbank, not an HSBC bank. The attention layers make the model far more likely to continue the sentence with something about a lion beside a river.
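Here's a minimal numerical sketch of that mechanism: scaled dot-product attention, the flavor transformers use, with tiny made-up vectors standing in for real learned embeddings. (Real models also project the input through learned query, key, and value matrices, which this sketch skips.)

    import numpy as np

    def attention(queries, keys, values):
        """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
        d = queries.shape[-1]
        scores = queries @ keys.T / np.sqrt(d)          # relevance of each word to each other word
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
        return weights @ values, weights                # weighted mix of the values

    # Tiny invented 2-dimensional "embeddings" for each word.
    words = ["the", "lion", "approached", "the", "bank"]
    x = np.array([[0.1, 0.0], [0.9, 0.8], [0.3, 0.2], [0.1, 0.0], [0.7, 0.9]])

    out, weights = attention(x, x, x)  # self-attention: Q, K, V all come from the input
    for word, row in zip(words, weights):
        print(word, np.round(row, 2))

Run it and the row for "bank" weights "lion" more heavily than any other word, simply because their invented vectors point in similar directions. That nudge toward the riverbank reading is all attention is doing.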

Transformers don't just look at the last word, though. Layers and layers of attention blocks are stacked on top of each other, allowing the model to track context across the entire input. This is how LLMs can write long, coherent pieces of text that remember what they're saying.

How LLMs Build a Coherent Narrative

When people talk about models with "seven billion parameters," those parameters are the weights. They're all the numbers inside the layers of the neural network, representing the memory of the model and what it learned during training.
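Those headline figures are just counts of weights. Here's a rough back-of-the-envelope sketch; the layer count and width are plausible guesses rather than any particular model's real configuration, and the formula ignores embeddings and a few smaller pieces.

    # Rough parameter count for a GPT-style transformer.
    # A common approximation: each transformer block holds about
    # 12 * d_model^2 weights (attention projections + feed-forward layers).
    n_layers = 32   # guess at the number of stacked transformer blocks
    d_model = 4096  # guess at the width of each layer

    total = n_layers * 12 * d_model ** 2
    print(f"~{total / 1e9:.1f} billion parameters")  # ~6.4 billion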

Let's use an example. If you give ChatGPT the prompt "Once upon a time," just those four words, it can write an entire story from that tiny seed.

The model looks at those words and, based on everything it has seen during training, picks the most likely next word, perhaps "in." Then it thinks again: "Once upon a time in..." and might pick "a," then "far," then "away."

This isn't just about finding the most common phrases. The model is constantly looking at what's already been said, feeding it back through its layers, and adjusting the probability of every candidate next word. If it introduces a dragon in the first paragraph, it can bring that dragon back in a later paragraph because it has paid attention to the entire story so far.

Let's say the LLM generates: "Once upon a time in a faraway kingdom, there lived a small dragon who didn't want to breathe fire." The model now has to decide what's next, picking the most likely coherent word based on everything that came before. Maybe the next sentence is: "Instead, he dreamed of becoming a pastry chef."

Now, that's an actual story. It's not just a statistically likely sentence; it's a narrative that holds together with a theme and a character arc. The model can keep going for thousands of tokens, all while remembering where it started. That's the power of the transformer.
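If you want to watch a model do exactly this, a few lines against a hosted LLM are enough. This sketch assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set; the model name is just an example, not a recommendation.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use whatever you have access to
        messages=[{"role": "user", "content": "Continue this story: Once upon a time"}],
        max_tokens=100,
    )
    print(response.choices[0].message.content)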

The Power of Context: An Introduction to Prompt Engineering

Here's a really cool thing you can do with a GPT-style LLM. If we pass in "Once upon a time," the model produces a probability distribution for the next word. The word "fish" has basically a zero percent chance of being next. Why would it be?

But watch what happens if I pass in this longer string:

You are an incredibly inventive story writer. You start your stories with "once upon a time," but the next word after that is always "fish." once upon a time

If you pass that long string into an LLM, the transformer architecture allows the network to look at the entire block of text and extract context. Because of the instructions provided earlier, the model will most likely predict "fish" as the next word.

You can try this yourself on some of the older LLMs, like GPT-3. The model isn't truly "understanding" in a human sense, but it's using the context to determine the most probable next word. You've essentially controlled its prediction by giving it explicit instructions.

This gives the illusion that you've instructed your large language model to do something. You haven't really. You've just given it a bunch of context that the transformers can use to work out what the most likely next word is going to be, and you've made it so that "fish" is now the most likely next word.
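If you want to see the probabilities shift for yourself, the legacy completions endpoint can return the top candidate tokens alongside the generated one. This sketch assumes the openai Python package, an API key in the environment, and that an older completion-style model is still available to your account.

    import math
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        'You are an incredibly inventive story writer. You start your stories '
        'with "once upon a time," but the next word after that is always '
        '"fish." once upon a time'
    )

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # an older, completion-style model
        prompt=prompt,
        max_tokens=1,
        logprobs=5,  # ask for the 5 most likely next tokens
    )

    # Each candidate token comes back with a log probability.
    for token, logprob in response.choices[0].logprobs.top_logprobs[0].items():
        print(f"{token!r}: {math.exp(logprob):.0%}")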

Conclusion

To sum it all up:

  • Basic neural networks can take a string of words and predict the next word based on probability.
  • Neural networks with transformers can predict the next word while also taking into account the context of everything that has come before.

This ability to maintain context is what allows LLMs to generate creative, coherent, and surprisingly complex text from simple prompts.

What's Next

The technique of crafting inputs to guide an LLM's output is the essence of what people call prompt engineering, and that's what we're going to be talking about in the next section. Thanks for reading!
