In a previous example, we used the official OpenAI NuGet package to get a chat application working quickly. In this article, we're going to take a step back and make that same call to OpenAI, but this time using classes that we create ourselves. We'll essentially be creating part of that NuGet package manually.
The reason for doing this is that when you're making API calls to an LLM service, it's important to know the actual structure of the JSON your application is sending. This knowledge becomes crucial later on when we discuss concepts like the Model Context Protocol (MCP), which is a JSON-based protocol that lets AI agents talk to external tools and data sources. In this section, we'll look at the structure of the message objects and write C# classes to represent them so you can see exactly what they look like.
We've talked about how a language model is really just a completion model. You give it a chunk of text, and it predicts what should come next, over and over again. When we build a conversation with an LLM, we're giving it one long block of text and asking it to complete it. That completion then becomes the next part of the conversation.
Because LLMs are stateless, each time we add a new message, we send the entire conversation history back again for the next turn. Messages are a structured way of building that massive block of text.
If you remember from the OpenAI Playground, we can have a conversation by setting a system prompt and then exchanging messages as a user and an assistant.

A cool feature of the OpenAI Playground is the ability to view the underlying code for the conversation. This shows you the actual JSON payload that's being sent to the API.

As you can see, the request contains the model name and a messages array. This array includes our system prompt, followed by the user and assistant messages in sequence. To continue the conversation, we would append our next user message to this array and send the entire thing back to the API. The API would then respond with a new assistant message. This corresponds to the message history we were creating in our C# example.
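If you don't have the Playground open, the payload looks roughly like this (the model name and message text here are illustrative placeholders):

{
  "model": "gpt-5-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is an LLM?" },
    { "role": "assistant", "content": "An LLM is a large language model that predicts the next token in a piece of text." },
    { "role": "user", "content": "How do I call one from C#?" }
  ]
}

Every turn of the conversation appends another object to the messages array, and the entire array is sent again on the next request.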
Now, let's create a C# console application that can talk to OpenAI by building these request and response objects from scratch. We'll start with a new .NET 10 console application. This time, we have not included the OpenAI NuGet package; we only have the dotenv.net package for managing environment variables.

We'll start by creating the data models needed to represent the JSON structure.
ChatRole Enum
Every message sent to the LLM has a role property. This tells the LLM where the message came from, providing context. The common roles are system, user, and assistant.
Let's create a ChatRole.cs file for an enum to represent these roles.

namespace RawJsonImplementation.Models;

public enum ChatRole
{
    User,
    Assistant,
    System,
    Tool
}
We won't be using the Tool role just yet, but it's one of the valid roles you can send to models like OpenAI's. These roles are fairly common across multiple AI providers.
ChatMessage Class
Next, we'll create a ChatMessage class to represent a single message in the conversation. Each message will have a role and content.

namespace RawJsonImplementation.Models;

public class ChatMessage
{
    public required ChatRole Role { get; set; }
    public string? Content { get; set; }
}
While the OpenAI API can handle more complex, multimodal content, a simple string property for Content is a compatible shortcut that works perfectly for text-based conversations.
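For reference, when multimodal input is needed, the OpenAI API accepts an array of content parts in place of the plain string. A user message with an image looks roughly like this (the URL is a placeholder):

{
  "role": "user",
  "content": [
    { "type": "text", "text": "What is in this image?" },
    { "type": "image_url", "image_url": { "url": "https://example.com/photo.jpg" } }
  ]
}

Our string-only Content property simply ignores that capability, which is fine for this article.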
ChatRequest Class
The ChatRequest class will represent the entire payload we send to the OpenAI API. It needs to include the model we want to use and the list of all messages in the conversation history.

namespace RawJsonImplementation.Models;

public class ChatRequest
{
    public string? Model { get; set; }
    public required List<ChatMessage> Messages { get; set; }
}
ChatResponse Class
Finally, we need a class to represent the response from the API. OpenAI's API returns a list of choices, where each choice contains a message from the assistant.

namespace RawJsonImplementation.Models;

public class ChatResponse
{
    public required List<Choice> Choices { get; set; }
}

public class Choice
{
    public required ChatMessage Message { get; set; }
}
These four classes represent the bare-bones structure for making a chat completion call to the OpenAI API without tool calling.
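For context, the JSON coming back from the chat completions endpoint looks roughly like the sample below; real responses contain more fields than we model (id, created, usage, finish_reason, and so on), but System.Text.Json simply ignores anything our classes don't declare. The id shown here is made up; the shape of the choices array is what matters for our ChatResponse and Choice classes.

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-5-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 20, "completion_tokens": 10, "total_tokens": 30 }
}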
OpenAIService
With our models in place, let's create a service to handle the communication with the OpenAI API. We'll create an OpenAIService class to encapsulate this logic.
HttpClient and JSON Serialization
Our service will need an HttpClient to make web requests. In the constructor, we'll configure it with the OpenAI API base address and the necessary authorization headers.
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using RawJsonImplementation.Models;

namespace RawJsonImplementation.Services;

public class OpenAIService
{
    private readonly HttpClient _httpClient = new();

    public OpenAIService(string apiKey)
    {
        _httpClient.BaseAddress = new Uri("https://api.openai.com/v1/");
        _httpClient.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);
        _httpClient.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
    }
}
Since we are manually handling JSON, we also need to configure our serialization settings to match what the OpenAI API expects. Their API uses snake_case for property names, and enum values should be lowercase strings. We can configure this using JsonSerializerOptions.

// Inside the OpenAIService class
private static readonly JsonSerializerOptions _jsonOptions = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
    // Serialize enum values as lowercase strings: "user", "assistant", "system", "tool"
    Converters = { new JsonStringEnumConverter(JsonNamingPolicy.CamelCase) }
};
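As a quick sanity check of these settings (this snippet isn't part of the service, just something you could drop into a scratch console app that references our Models namespace), serializing a ChatMessage produces the lowercase, snake_case JSON the API expects:

// Hypothetical scratch code, not part of OpenAIService
var options = new JsonSerializerOptions
{
    PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
    Converters = { new JsonStringEnumConverter(JsonNamingPolicy.CamelCase) }
};
var sample = new ChatMessage { Role = ChatRole.User, Content = "Hello!" };
Console.WriteLine(JsonSerializer.Serialize(sample, options));
// Prints: {"role":"user","content":"Hello!"}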
CompleteChat Method
Now we'll create an async method, CompleteChat, that takes the list of messages, builds the request, sends it to the API, and processes the response. This method will replicate the functionality of the CompleteChat method from the OpenAI NuGet package.
First, we create a ChatRequest object and serialize it into a JSON string using our custom options.

public async Task<ChatMessage> CompleteChat(List<ChatMessage> messages, CancellationToken cancellationToken = default)
{
    var openAIRequest = new ChatRequest
    {
        Model = "gpt-5-mini",
        Messages = messages,
    };

    var jsonRequest = JsonSerializer.Serialize(openAIRequest, _jsonOptions);
    using var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");

    // ... continued below
}
Next, we make the POST request to the /chat/completions endpoint, read the response, and perform some basic error handling.

// Inside the CompleteChat method...
try
{
    var response = await _httpClient.PostAsync("chat/completions", content, cancellationToken);
    var responseContent = await response.Content.ReadAsStringAsync(cancellationToken);

    if (!response.IsSuccessStatusCode)
    {
        throw new InvalidOperationException($"Error calling OpenAI API: {response.StatusCode} - {responseContent}");
    }

    // ... continued below
}
catch (HttpRequestException ex)
{
    throw new InvalidOperationException("Error calling OpenAI API", ex);
}
Finally, we deserialize the JSON response back into our ChatResponse object, extract the first message, and return it.

// Inside the try block of the CompleteChat method...
var result = JsonSerializer.Deserialize<ChatResponse>(responseContent, _jsonOptions)
    ?? throw new InvalidOperationException("Failed to deserialize OpenAI response.");

var firstChoice = result.Choices?.FirstOrDefault();
if (firstChoice == null || firstChoice.Message == null)
{
    throw new InvalidOperationException("No choices returned from OpenAI.");
}

return new ChatMessage
{
    Role = firstChoice.Message.Role,
    Content = firstChoice.Message.Content,
};
With our service complete, we can now use it in Program.cs to create a simple chat loop, just like we did before.

using dotenv.net;
using RawJsonImplementation.Models;
using RawJsonImplementation.Services;

DotEnv.Load();

var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
if (string.IsNullOrEmpty(openAIKey))
{
    throw new InvalidOperationException("Missing OPENAI_API_KEY");
}

List<ChatMessage> messages =
[
    new ChatMessage
    {
        Role = ChatRole.Assistant,
        Content = "Hello, what do you want to do today?"
    }
];

Console.WriteLine(messages[0].Content);

var aiService = new OpenAIService(openAIKey);

while (true)
{
    Console.ForegroundColor = ConsoleColor.Blue;
    var input = Console.ReadLine();
    if (string.IsNullOrEmpty(input) || input.ToLower() == "exit")
    {
        break;
    }
    Console.ResetColor();

    messages.Add(new ChatMessage
    {
        Role = ChatRole.User,
        Content = input
    });

    var response = await aiService.CompleteChat(messages);
    messages.Add(response);
    Console.WriteLine(messages.Last().Content);
}
If we run the application and set a breakpoint inside our CompleteChat method, we can inspect the objects and see the raw JSON being sent.

After the API call, we can see the deserialized response, which contains the message from the assistant.

The application works just as before, but now we have a much deeper understanding of the data structures being sent across the wire.

This exercise leaves you with a hand-rolled version of the same functionality for sending requests to OpenAI, and it's worth becoming familiar with the process. I would encourage you to check out the source code and step through the debugger to inspect these classes yourself. Understanding the concept of building a messages array, setting roles, and sending the entire history with each request is fundamental to working with LLMs.
The next sections of this series will move on to proper, production-ready code. We're going to introduce a package from Microsoft called Microsoft.Extensions.AI, which has its own version of these message classes. We won't be using the classes we created here going forward, but knowing how they are structured is essential.
Thanks for reading!