  • Making A Completion Request from C#

    Working with Large Language Models

    In this section, we're actually going to write some code. We'll be making a basic LLM completion request from C# by building a bare-bones console application that uses the official OpenAI NuGet package. This will allow us to make a call to the OpenAI API to complete some text, similar to what we learned in previous sections.

    It's important to note that this is a throwaway example. Later on, you'll be using LLMs not just from OpenAI but also from providers like Google Gemini and Anthropic, and you will implement this functionality in a much more generic way. For now, let's just get started with a basic example using OpenAI.

    Setting Up Your Environment

    Before we can start writing code and calling the OpenAI API, we need to get a few prerequisites in order.

    Getting Your OpenAI API Key

    First things first, we're going to need an API key. To get one, head over to openai.com and sign up for an account. Once you're signed in, navigate to the API dashboard.

    On the left-hand side, you'll find a section for API keys. Click on that, and then click "Create new secret key." You can give your key a name and assign it to a project. This will generate a new API key for you to use in your application.

    Creating a new secret key in the OpenAI API dashboard.

    Creating the C# Console App

    Next, create a new console application. I've just created a basic .NET 10 console application in C#. At the moment, it just prints "Hello, World!" to the console.

    A basic C# console application that prints Hello, World!.

    If we run this application by hitting F5, we can confirm it works as expected.

    The console output showing 'Hello, World!'.

    Perfect. Now we need to get our OpenAI key into the application securely. You can find the demo code for this on GitHub, but one thing you won't find is my API key. I've stored it in a file called .env.

    To do this, create a new file named .env in the root of your project. Inside this file, add a single line with your API key:

    OPENAI_API_KEY=paste_your_api_key_here
    

    To ensure you don't accidentally commit this file to source control, it's a good practice to add .env to your .gitignore file.
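    On a Unix-like shell, one way to do this is the sketch below. It assumes you're in the project root; the second command only matters if you've already staged the file by accident:

    ```shell
    # Append .env to .gitignore, creating the file if it doesn't exist yet
    echo ".env" >> .gitignore

    # If .env was already tracked, also remove it from the index
    # (safe to run even if it wasn't; errors are suppressed)
    git rm --cached .env 2>/dev/null || true
    ```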

    Adding the .env file to .gitignore to protect secrets.

    Installing Necessary NuGet Packages

    We'll need a couple of NuGet packages. The first is dotenv.net, which allows us to read environment variables from our .env file. The second is the OpenAI package itself.

    The OpenAI NuGet package page.

    Install these two packages using the .NET CLI:

    dotnet add package dotenv.net
    dotnet add package OpenAI
    

    After installing, your project file (.csproj) should look something like this:

    The .csproj file showing the dotenv.net and OpenAI package references.

    With the setup complete, we're ready to start coding.

    Writing the C# Chat Client

    Let's clear out the default "Hello, World!" code and start building our chat client.

    Loading the API Key Securely

    First, we need to add our using statements and load the API key from the .env file.

    using dotenv.net;
    using OpenAI.Chat;
    
    DotEnv.Load();
    var openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
    
    if (openAiKey == null)
    {
        throw new InvalidOperationException("Missing OPENAI_API_KEY");
    }
    

    We'll use DotEnv.Load() to load the variables from our file. Then, we can retrieve the key using the static Environment.GetEnvironmentVariable method. We also add a simple check to throw an InvalidOperationException if the key is missing.

    C# code for loading the OpenAI API key from environment variables.

    Initializing the OpenAI Chat Client

    Now we can create an instance of the ChatClient. This client requires a model name and the API key. We'll use gpt-5-nano, which is a small model perfect for our basic tests.

    ChatClient client = new(model: "gpt-5-nano", openAiKey);
    

    Initializing the ChatClient with the model and API key.

    Managing the Chat History

    As I discussed in the last section, LLMs are stateless. They don't remember your chat history between requests. To maintain a conversation, we need to send the entire chat history with every API call.

    We'll manage this using a list of ChatMessage objects. We'll start the conversation with a message from the assistant to kick things off.

    List<ChatMessage> messages =
    [
        new AssistantChatMessage("Hello, what do you want to do today?")
    ];
    

    The ChatMessage class comes from the OpenAI.Chat namespace. As we talk to the LLM, we'll be adding UserChatMessage instances for our input and new AssistantChatMessage instances for the AI's responses.

    Creating a List of ChatMessage to store the conversation history.

    Creating the Chat Loop

    To create an interactive chat experience in the console, we'll use an infinite loop, often called an "agent loop" or "chat loop." This is one of the few times where a while (true) loop is appropriate. The loop will continuously wait for user input, send it to the AI, and display the response.

    Here's the complete structure of our Program.cs file:

    The complete C# code for the console chatbot application.

    using dotenv.net;
    using OpenAI.Chat;
    
    DotEnv.Load();
    var openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
    
    if (openAiKey == null)
    {
        throw new InvalidOperationException("Missing OPENAI_API_KEY");
    }
    
    ChatClient client = new(model: "gpt-5-nano", openAiKey);
    
    List<ChatMessage> messages =
    [
        new AssistantChatMessage("Hello, what do you want to do today?")
    ];
    
    Console.WriteLine(messages[0].Content[0].Text);
    
    while (true)
    {
        Console.ForegroundColor = ConsoleColor.Blue;
        var input = Console.ReadLine();
        if (input == null || input.ToLower() == "exit")
        {
            break;
        }
        Console.ResetColor();
    
        messages.Add(new UserChatMessage(input));
    
        ChatCompletion completion = client.CompleteChat(messages);
    
        var response = completion.Content[0].Text;
    
        messages.Add(new AssistantChatMessage(response));
        Console.WriteLine(response);
    }
    

    Inside the loop, we:

    1. Set the console color and read a line of input from the user using Console.ReadLine.
    2. Check if the user typed "exit" to break the loop.
    3. Add the user's input to our messages list as a UserChatMessage.
    4. Call client.CompleteChat(messages), passing the entire conversation history. This makes the API call.
    5. Extract the text response from the completion object.
    6. Add the AI's response to our messages list as an AssistantChatMessage so it's included in the next request.
    7. Print the AI's response to the console.

    Running and Testing the Chatbot

    Let's run the application and see it in action. The console will display the initial message, "Hello, what do you want to do today?", and wait for your input.

    I'll start by asking it to "tell me a joke."

    The console application running and waiting for user input.

    When I hit enter, the code sends my request to OpenAI. By inspecting the messages list in the debugger, we can see it contains both the initial assistant message and my user message.

    Debugging the application and inspecting the messages list.

    The API call happens synchronously (though an async version is available), and we get a completion object back. This object contains the response text as well as useful metadata.
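    If you'd rather not block on the API call, here's a minimal sketch of the asynchronous variant, assuming the same client and messages list as in the loop above. In the OpenAI package, CompleteChatAsync is the async counterpart of CompleteChat:

    ```csharp
    // Async counterpart of the synchronous call inside the chat loop.
    // CompleteChatAsync returns a ClientResult<ChatCompletion>, which
    // converts implicitly to ChatCompletion, so await-and-assign works.
    // Top-level statements support await, so this drops into Program.cs as-is.
    ChatCompletion completion = await client.CompleteChatAsync(messages);

    var response = completion.Content[0].Text;
    messages.Add(new AssistantChatMessage(response));
    Console.WriteLine(response);
    ```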

    The AI's joke response displayed in the console.

    The AI responds with a classic: "Why don't scientists trust atoms? Because they make up everything." Honestly, I've spent a lot of time asking LLMs to tell me jokes, and they always tell you this exact same one.

    Now, let's continue the conversation. I'll say, "yes, tell me another one."

    Continuing the conversation with the chatbot.

    This time, when we hit the breakpoint, our messages list has four items: the initial prompt, my first question, the first joke, and my new request. The list is being built up with the entire history.

    To prove that the LLM is using this history, I'll ask it: "what was the first joke you told me about?"

    Asking the chatbot to recall previous information from the conversation.

    Because we passed in the entire conversation, it correctly answers: "The first joke was: 'Why don't scientists trust atoms? Because they make up everything.'"

    The chatbot correctly recalling the first joke.

    Remember, the LLM hasn't learned or remembered anything. It's completely stateless. It knows the context of our conversation only because we are passing the entire history in every single request.

    Understanding LLM Tokens

    While debugging the response from the API, you might notice a Usage property on the completion object. This contains information about tokens.

    Inspecting the token usage in the debugger.

    We haven't talked about tokens a huge amount, but in a nutshell, you can think of a token as a part of a word, kind of like a syllable. For example, the word "syllable" might be three tokens.

    Tokens are what OpenAI and other model providers use to measure usage. When they charge you for access to a model, they count the number of tokens in your input and the number of tokens generated in the output. When we say an LLM generates a story "word by word," it's actually generating it "token by token." It's a minor detail, but helpful to understand.
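    To make the billing model concrete, here's a hedged sketch of turning token counts into a cost estimate. The per-million-token prices below are placeholders, not real OpenAI rates (check the current pricing page), and the token counts are hard-coded stand-ins for the values you'd read from the Usage object on the completion:

    ```csharp
    // Placeholder prices per one million tokens -- NOT real OpenAI rates.
    const decimal InputPricePerMillion = 0.05m;
    const decimal OutputPricePerMillion = 0.40m;

    // In the real app these would come from completion.Usage;
    // hard-coded here purely for illustration.
    int inputTokens = 1_200;
    int outputTokens = 350;

    // Cost = tokens * price-per-million / 1,000,000, summed for input and output.
    decimal cost = inputTokens * InputPricePerMillion / 1_000_000m
                 + outputTokens * OutputPricePerMillion / 1_000_000m;

    Console.WriteLine($"Estimated cost: ${cost:F6}");
    ```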

    Conclusion

    This simple console application is a really good entry point for learning how to build up conversation histories, send them off to an LLM, and get a response back. You can chat with this pretty much like you're talking to ChatGPT.

    Have a play around with this code, which you can find on GitHub. Try asking it to tell you a story or answer complex questions. In the next section, we'll start to do this a little bit more properly. Thanks for reading!

    What's Next

    • Exploring different LLM providers like Google Gemini and Anthropic.
    • Building a more generic and reusable client for interacting with LLMs.
    • Implementing asynchronous API calls for better performance.
