Understanding Messages And Roles in OpenAI API Calls

Working with Large Language Models

In a previous example, we used the official OpenAI NuGet package to get a chat application working quickly. In this article, we're going to take a step back and make that same call to OpenAI, but this time using classes that we create ourselves. We'll essentially be creating part of that NuGet package manually.

The reason for doing this is that when you're making API calls to an LLM service, it's important to know the actual structure of the JSON your application is sending. This knowledge becomes crucial later on when we discuss concepts like the Model Context Protocol (MCP), which is a JSON-based protocol for communicating with AI agents. We'll spend this section looking at the structure of the message objects and writing C# classes for them so you can understand what they look like.

Understanding the OpenAI Message Structure

We've talked about how a language model is really just a completion model. You give it a chunk of text, and it predicts what should come next, over and over again. When we build a conversation with an LLM, we're giving it one long block of text and asking it to complete it. That completion then becomes the next part of the conversation.

Because LLMs are stateless, each time we add a new message, we send the entire conversation history back again for the next turn. Messages are a structured way of building that massive block of text.

If you remember from the OpenAI Playground, we can have a conversation by setting a system prompt and then exchanging messages as a user and an assistant.

OpenAI Playground showing a conversation with an AI assistant.

Inspecting the Raw JSON Payload

A cool feature of the OpenAI Playground is the ability to view the underlying code for the conversation. This shows you the actual JSON payload that's being sent to the API.

Viewing the raw JSON payload in the OpenAI Playground.

As you can see, the request contains the model name and a messages array. This array includes our system prompt, followed by the user and assistant messages in sequence. To continue the conversation, we would append our next user message to this array and send the entire thing back to the API. The API would then respond with a new assistant message. This corresponds to the message history we were creating in our C# example.
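A minimal version of that payload looks roughly like this (the model name and message wording here are illustrative, and many optional request fields are omitted):

```json
{
  "model": "gpt-5-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is the capital of France?" },
    { "role": "assistant", "content": "The capital of France is Paris." },
    { "role": "user", "content": "And what is its population?" }
  ]
}
```

Notice that every entry is just a role plus some content. That's the shape our C# classes will need to produce.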

Building the C# Data Models

Now, let's create a C# console application that can talk to OpenAI by building these request and response objects from scratch. We'll start with a new .NET 10 console application. This time, we have not included the OpenAI NuGet package; we only have the dotenv.net package for managing environment variables.

Visual Studio Code showing the project dependencies without the OpenAI NuGet package.

We'll start by creating the data models needed to represent the JSON structure.

The ChatRole Enum

Every message sent to the LLM has a role property. This tells the LLM where the message came from, providing context. The common roles are system, user, and assistant.

Let's create a ChatRole.cs file for an enum to represent these roles.

C# code for the ChatRole enum in Visual Studio Code.

namespace RawJsonImplementation.Models;

public enum ChatRole
{
    User,
    Assistant,
    System,
    Tool
}

We won't be using the Tool role just yet, but it's one of the valid roles the OpenAI API accepts, and these roles are fairly common across AI providers.

The ChatMessage Class

Next, we'll create a ChatMessage class to represent a single message in the conversation. Each message will have a role and content.

C# code for the ChatMessage class in Visual Studio Code.

namespace RawJsonImplementation.Models;

public class ChatMessage
{
    public required ChatRole Role { get; set; }
    public string? Content { get; set; }
}

While the OpenAI API can handle more complex, multimodal content, a simple string property for Content is a compatible shortcut that works perfectly for text-based conversations.
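For reference, the API also accepts content as an array of typed parts rather than a plain string — that's how images get attached to a message. We won't model that here, but a multimodal user message looks roughly like this:

```json
{
  "role": "user",
  "content": [
    { "type": "text", "text": "What is in this image?" },
    { "type": "image_url", "image_url": { "url": "https://example.com/photo.jpg" } }
  ]
}
```

Since this series only deals with text conversations, the simple string property is all we need.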

The ChatRequest Class

The ChatRequest class will represent the entire payload we send to the OpenAI API. It needs to include the model we want to use and the list of all messages in the conversation history.

C# code for the ChatRequest class in Visual Studio Code.

namespace RawJsonImplementation.Models;

public class ChatRequest
{
    public string? Model { get; set; }
    public required List<ChatMessage> Messages { get; set; }
}

The ChatResponse Class

Finally, we need a class to represent the response from the API. OpenAI's API returns a list of choices, where each choice contains a message from the assistant.

C# code for the ChatResponse and Choice classes in Visual Studio Code.

namespace RawJsonImplementation.Models;

public class ChatResponse
{
    public required List<Choice> Choices { get; set; }
}

public class Choice
{
    public required ChatMessage Message { get; set; }
}
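For context, a heavily trimmed chat completion response looks something like this — our two classes map only the parts we care about, and System.Text.Json simply ignores the rest:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-5-mini",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ]
}
```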

These four classes represent the bare-bones structure for making a chat completion call to the OpenAI API without tool calling.

Creating the OpenAIService

With our models in place, let's create a service to handle the communication with the OpenAI API. We'll create an OpenAIService class to encapsulate this logic.

Setting up HttpClient and JSON Serialization

Our service will need an HttpClient to make web requests. In the constructor, we'll configure it with the OpenAI API base address and the necessary authorization headers.

using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using RawJsonImplementation.Models;

namespace RawJsonImplementation.Services;

public class OpenAIService
{
    private readonly HttpClient _httpClient = new();

    public OpenAIService(string apiKey)
    {
        _httpClient.BaseAddress = new Uri("https://api.openai.com/v1/");
        _httpClient.DefaultRequestHeaders.Authorization = 
            new AuthenticationHeaderValue("Bearer", apiKey);
        _httpClient.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
    }
}

Since we are manually handling JSON, we also need to configure our serialization settings to match what the OpenAI API expects. Their API uses snake_case for property names, and enum values should be lowercase strings. We can configure this using JsonSerializerOptions.

C# code for configuring JSON serialization options.

// Inside the OpenAIService class
private static readonly JsonSerializerOptions _jsonOptions = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
    // Serialize enum values as camelCase strings ("user", "assistant", "system")
    Converters = { new JsonStringEnumConverter(JsonNamingPolicy.CamelCase) }
};

Note that the enum converter is registered in the initializer rather than in the constructor. Because _jsonOptions is a static field shared by every instance, adding the converter in the constructor would append a duplicate converter each time an OpenAIService is created — and JsonSerializerOptions becomes read-only after its first use, so constructing a second instance would throw.
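As a quick sanity check, serializing a ChatMessage with these options produces exactly the shape the API expects. This is just a sketch you could drop into a scratch console app alongside the model classes we defined above:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;
using RawJsonImplementation.Models;

var options = new JsonSerializerOptions
{
    PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
    Converters = { new JsonStringEnumConverter(JsonNamingPolicy.CamelCase) }
};

var message = new ChatMessage { Role = ChatRole.User, Content = "Hello!" };

// Property names become snake_case, the enum becomes a lowercase string
Console.WriteLine(JsonSerializer.Serialize(message, options));
// {"role":"user","content":"Hello!"}
```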

Implementing the CompleteChat Method

Now we'll create an async method, CompleteChat, that takes the list of messages, builds the request, sends it to the API, and processes the response. This method will replicate the functionality of the CompleteChat method from the OpenAI NuGet package.

First, we create a ChatRequest object and serialize it into a JSON string using our custom options.

C# code showing the creation and serialization of the ChatRequest object.

public async Task<ChatMessage> CompleteChat(List<ChatMessage> messages, CancellationToken cancellationToken = default)
{
    var openAIRequest = new ChatRequest
    {
        Model = "gpt-5-mini",
        Messages = messages,
    };

    var jsonRequest = JsonSerializer.Serialize(openAIRequest, _jsonOptions);
    using var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");

    // ... continued below
}

Next, we make the POST request to the /chat/completions endpoint, read the response, and perform some basic error handling.

Making the asynchronous POST request to the OpenAI API.

// Inside the CompleteChat method...
try
{
    var response = await _httpClient.PostAsync("chat/completions", content, cancellationToken);
    var responseContent = await response.Content.ReadAsStringAsync(cancellationToken);

    if (!response.IsSuccessStatusCode)
    {
        throw new InvalidOperationException($"Error calling OpenAI API: {response.StatusCode} - {responseContent}");
    }

    // ... continued below
}
catch (HttpRequestException ex)
{
    throw new InvalidOperationException("Error calling OpenAI API", ex);
}

Finally, we deserialize the JSON response back into our ChatResponse object, extract the first message, and return it.

Deserializing the JSON response and extracting the message.

// Inside the try block of the CompleteChat method...

var result = JsonSerializer.Deserialize<ChatResponse>(responseContent, _jsonOptions)
    ?? throw new InvalidOperationException("Failed to deserialize OpenAI response.");

var firstChoice = result.Choices?.FirstOrDefault();
if (firstChoice == null || firstChoice.Message == null)
{
    throw new InvalidOperationException("No choices returned from OpenAI.");
}

return new ChatMessage
{
    Role = firstChoice.Message.Role,
    Content = firstChoice.Message.Content,
};

Putting It All Together in a Console App

With our service complete, we can now use it in Program.cs to create a simple chat loop, just like we did before.

The main program logic in Program.cs using the custom OpenAIService.

using dotenv.net;
using RawJsonImplementation.Models;
using RawJsonImplementation.Services;

DotEnv.Load();
var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
if (string.IsNullOrEmpty(openAIKey))
{
    throw new InvalidOperationException("Missing OPENAI_API_KEY");
}

List<ChatMessage> messages =
[
    new ChatMessage
    {
        Role = ChatRole.Assistant,
        Content = "Hello, what do you want to do today?"
    }
];

Console.WriteLine(messages[0].Content);

var aiService = new OpenAIService(openAIKey);

while (true)
{
    Console.ForegroundColor = ConsoleColor.Blue;
    var input = Console.ReadLine();
    if (string.IsNullOrEmpty(input) || input.ToLower() == "exit")
    {
        break;
    }
    Console.ResetColor();

    messages.Add(new ChatMessage
    {
        Role = ChatRole.User,
        Content = input
    });

    var response = await aiService.CompleteChat(messages);
    messages.Add(response);
    Console.WriteLine(messages.Last().Content);
}

If we run the application and set a breakpoint inside our CompleteChat method, we can inspect the objects and see the raw JSON being sent.

Debugging the application and inspecting the serialized JSON request.

After the API call, we can see the deserialized response, which contains the message from the assistant.

Debugging the application and inspecting the deserialized ChatResponse object.

The application works just as before, but now we have a much deeper understanding of the data structures being sent across the wire.

The running console application showing a successful conversation.

Conclusion & What's Next

This exercise gives you a hand-built version of the same functionality the OpenAI NuGet package provides for sending requests to OpenAI. It's worth becoming familiar with this process: I'd encourage you to check out the source code and step through the debugger to inspect these classes yourself. Understanding how to build a messages array, set roles, and send the entire history with each request is fundamental to working with LLMs.

The next sections of this series will move on to proper, production-ready code. We're going to introduce a package from Microsoft called Microsoft.Extensions.AI, which has its own version of these message classes. We won't be using the classes we created here going forward, but knowing how they are structured is essential.

Thanks for reading!

Related Topics

  • Tool Calling: We'll revisit this JSON structure later when we explore how tools are defined and called within the API payload.
  • Production-Ready AI Apps: Explore how to use official libraries like Microsoft.Extensions.AI to build more robust and maintainable applications.
  • Multi-Modal Models: Learn how the content property can be expanded to include more than just text, such as images.
