
Create a ChatGPT Console AI Chatbot in C#

9/23/2025

Large Language Models (LLMs) are everywhere right now - chatbots, copilots, content generation, code assistants.

This walkthrough is your hands-on entry point: we’ll create a basic C# console application that talks to ChatGPT using the OpenAI API.

By the end, you’ll have a working chatbot in your terminal - and a foundation for more advanced things later.

⚠️ Quick, upfront note: API access to OpenAI's LLMs is a paid service, so you'll need to add billing details and make an initial deposit. The example we'll go through costs less than a couple of cents (depending on how much chatting you do with your bot). If you'd prefer a completely free setup, stick around to the end, where we'll go through the same chatbot using a local LLM provided by Ollama.

What You’ll Learn

By following this tutorial, you’ll be able to:

  • Create and run your first C# console app from scratch.
  • Securely load secrets with a .env file and environment variables.
  • Connect your program to ChatGPT through the OpenAI API.
  • Build a simple chat loop that preserves conversation history.
  • Understand why LLMs are stateless and how to handle context.
  • Try a free alternative with Ollama for local experiments.

Step 1: Get an API key from OpenAI

First, sign up for an account at openai.com.
Once you’re in the dashboard:

  1. Find the API Keys section on the left: platform.openai.com/api-keys
  2. Click Create new secret key.
  3. Give it a name and, optionally, assign it to a project.
  4. Copy the key somewhere safe.

This is the credential your app will use to talk to the API.


Step 2: Create a console application

dotnet new console -o ChatDemo
cd ChatDemo
dotnet run

You should see the classic Hello, World! output.


Step 3: Hide your API key in a .env file

Create a .env file in the root of your project:

OpenAI_API_Key=sk-123abc...

If your project is under source control, it's good practice to add the .env file to .gitignore so the key is never committed and pushed to a remote repository.
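Ignoring the file takes a single line in .gitignore:

# .gitignore
.env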


We’re going to use the dotenv.net package to load the values from this .env file into environment variables at runtime.

Step 4: Install the NuGet packages

dotnet add package dotenv.net
dotnet add package OpenAI

Step 5: Connect your C# app to ChatGPT

Replace the contents of Program.cs with:

using dotenv.net;
using OpenAI.Chat;

DotEnv.Load();

var apiKey = Environment.GetEnvironmentVariable("OpenAI_API_Key");
if (string.IsNullOrEmpty(apiKey))
    throw new InvalidOperationException("Missing OpenAI_API_Key in environment variables.");

var client = new ChatClient("gpt-5-nano", apiKey);

var messages = new List<ChatMessage>
{
    new AssistantChatMessage("Hello! What do you want to do today?")
};

Console.WriteLine(messages[0].Content[0].Text);

while (true)
{
    Console.ForegroundColor = ConsoleColor.Blue;
    var input = Console.ReadLine();

    if (input == null || input.Trim().ToLower() == "exit")
        break;

    Console.ResetColor();

    messages.Add(new UserChatMessage(input));

    ChatCompletion completion = await client.CompleteChatAsync(messages);
    var response = completion.Content[0].Text;

    messages.Add(new AssistantChatMessage(response));

    Console.WriteLine(response);
}

Run it with:

dotnet run

You’ll see the assistant’s opening message, then you can type into the console and ChatGPT will reply. Type exit to quit.


Note: if you get a 429 error at this point, it is likely due to either billing not being enabled or insufficient credit on your OpenAI account.
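If you'd like the app to handle API failures more gracefully than an unhandled exception, you can wrap the call in a try/catch. This is a minimal sketch that assumes the OpenAI .NET SDK surfaces HTTP errors as System.ClientModel.ClientResultException; check the exception type your installed version actually throws:

using System.ClientModel;

try
{
    ChatCompletion completion = await client.CompleteChatAsync(messages);
    Console.WriteLine(completion.Content[0].Text);
}
catch (ClientResultException ex)
{
    // A 429 here usually means billing isn't enabled or the account has run out of credit
    Console.WriteLine($"OpenAI request failed (HTTP {ex.Status}): {ex.Message}");
}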


Step 6: Understanding how the chat loop works

Let’s walk through the important parts of Program.cs so you know what each piece does.

1. Loading your API key

DotEnv.Load();
var apiKey = Environment.GetEnvironmentVariable("OpenAI_API_Key");

We load the .env file and pull the API key into the program. This keeps secrets out of your code.

2. Creating the client

var client = new ChatClient("gpt-5-nano", apiKey);

This is the object that sends your requests to OpenAI and gets responses back. We’ve selected the smaller ‘gpt-5-nano’ model, and the API key authenticates each request.
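Swapping models is just a matter of changing that first constructor argument. For example (the model name here is only an illustration; use whichever chat model your account has access to):

// Any chat-capable model your account can use works here
var client = new ChatClient("gpt-4o-mini", apiKey);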

3. Seeding the conversation

var messages = new List<ChatMessage>
{
    new AssistantChatMessage("Hello! What do you want to do today?")
};

We start with an initial assistant message. The messages list will grow over time and represents the full conversation.
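If you want to steer the assistant's tone or behaviour, you can also seed the list with a SystemChatMessage before the greeting, for example:

var messages = new List<ChatMessage>
{
    // The system message sets ground rules that apply to every reply
    new SystemChatMessage("You are a concise assistant that answers in one or two sentences."),
    new AssistantChatMessage("Hello! What do you want to do today?")
};

If you do this, remember that the greeting is now messages[1], so adjust the Console.WriteLine accordingly.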

4. The loop

while (true)
{
    // read user input
    // send to ChatGPT
    // get a response back
    // print the response
}

This is the chat loop. It runs until you type exit.

5. Preserving history (stateless models)

messages.Add(new UserChatMessage(input));
ChatCompletion completion = await client.CompleteChatAsync(messages);

Every time you call the model, you send the entire conversation so far. LLMs don’t “remember” past calls.

Think of it like functional programming:

  • A pure function has no internal state.
  • Every call takes input and produces output.
  • No hidden memory between calls.

LLMs are the same. If you don’t send the prior messages, the model has no idea what you talked about before.

That’s why in our loop, we keep a growing messages list and pass it on every request.

What we just did: reinforced that ChatGPT isn’t storing your chat history. You are managing it in your program.
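Because the whole history is resent on every call, long conversations grow steadily in size (and token cost). A common mitigation is to cap the history while keeping the most recent turns; the helper below is a minimal sketch of that idea:

// Keep the opening message plus the most recent turns, dropping the oldest ones
static void TrimHistory(List<ChatMessage> messages, int maxMessages = 20)
{
    while (messages.Count > maxMessages)
    {
        // Index 1 is the oldest turn after the opening assistant message
        messages.RemoveAt(1);
    }
}

Calling TrimHistory(messages) just before CompleteChatAsync keeps requests bounded, at the cost of the model forgetting the earliest part of the chat.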

6. Storing the response

var response = completion.Content[0].Text;
messages.Add(new AssistantChatMessage(response));

We save ChatGPT’s answer back into the history so it can influence the next reply.
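After a couple of turns, the history looks something like this (replies abbreviated):

// messages[0] -> Assistant: "Hello! What do you want to do today?"
// messages[1] -> User:      "Tell me a joke about C#"
// messages[2] -> Assistant: "<the model's reply>"
// messages[3] -> User:      "Another one, please"
// ...and so on, growing with every exchange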

What we just did: turned our simple while loop into a true chat - by preserving the conversation and resending it each time.

Wrapping up (for the OpenAI version)

By now you’ve got:

  • An OpenAI API key stored safely in .env.
  • A C# console application calling ChatGPT.
  • A basic agent loop where you can type questions and get responses.
  • An understanding of statelessness and why you must send the conversation each time.

Bonus: Using Ollama (Free Local Model)

Don’t want to pay for API calls? You can run a model locally with Ollama.

Step A: Install Ollama

  • macOS/Linux:
    curl -fsSL https://ollama.com/install.sh | sh
    
  • Windows: download and run the installer.

Then check:

ollama --version

If the command isn’t found, it means Ollama isn’t installed or not in your PATH.

Step B: Pull a model

ollama pull llama3


This downloads Llama 3 (8B) - about 4.7 GB.
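Once the download finishes, you can confirm the model is available locally:

ollama list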

Step C: Create a new console application

dotnet new console -o ChatDemoOllama
cd ChatDemoOllama

Step D: Install NuGet package

dotnet add package OllamaSharp

Step E: Write the chatbot

Replace the contents of Program.cs with the code below. We’re keeping the same structure but swapping in the equivalent objects from the OllamaSharp package.

using OllamaSharp;
using OllamaSharp.Models.Chat;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"), "llama3");

// Conversation history
var history = new List<Message>
{
    new Message { Role = "assistant", Content = "Hello! What do you want to do today?" }
};

Console.WriteLine(history[0].Content);

while (true)
{
    Console.ForegroundColor = ConsoleColor.Blue;
    var input = Console.ReadLine();

    if (input == null || input.Trim().ToLower() == "exit")
        break;

    Console.ResetColor();

    history.Add(new Message { Role = "user", Content = input });

    var request = new ChatRequest { Messages = history };

    Message? lastAssistantMessage = null;

    await foreach (var response in ollama.ChatAsync(request))
    {
        if (lastAssistantMessage == null)
        {
            lastAssistantMessage = new Message { Role = "assistant", Content = "" };
            history.Add(lastAssistantMessage);
        }

        lastAssistantMessage.Content += response.Message?.Content;
        Console.Write(response.Message?.Content);
    }

    Console.WriteLine();
}

Run it:

dotnet run


Now you’ve got a chatbot with no API key, no billing, no internet dependency - just local inference.
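One practical note: the client assumes the Ollama server is listening on http://localhost:11434, which the desktop app (or ollama serve) provides. If you'd like the program to fail fast when the server isn't reachable, you can add a health check before the chat loop; the sketch below assumes OllamaSharp exposes an IsRunningAsync method in the version you installed:

// Exit early with a helpful message if the local Ollama server isn't reachable
if (!await ollama.IsRunningAsync())
{
    Console.WriteLine("Ollama doesn't seem to be running on http://localhost:11434. Start it (e.g. 'ollama serve') and try again.");
    return;
}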

Next Steps

What you’ve built here is just the beginning. Chatbots are fun, but the real power comes when you start thinking in terms of agents - AI systems that can reason, take actions, and work with external tools.

In our Getting Started: AI Agents in C# course, you’ll go beyond simple conversations to learn how to:

  • Design intelligent agents that follow reasoning patterns like ReAct.
  • Connect your C# code to multiple AI services - OpenAI, Ollama, Azure, HuggingFace, and more.
  • Give your agents tools and functions so they can take real actions, not just generate text.
  • Understand memory and context management so your agents can keep track of information across sessions.
  • Build practical AI systems for coding assistance, data analysis, customer support, and even game logic.
  • Create and deploy your own MCP (Model Context Protocol) server in C# to integrate agents into larger systems.

👉 If you’re ready to move from a simple chatbot loop to full-scale AI agents in C#, the course is where the fun really starts.

About the Author


Nick Chapsas

Nick Chapsas is a .NET & C# content creator, educator and a Microsoft MVP for Developer Technologies with years of experience in Software Engineering and Engineering Management.

He has worked for some of the biggest companies in the world, building systems that served millions of users and tens of thousands of requests per second.

Nick creates free content on YouTube and is the host of the Keep Coding Podcast.

More courses by Nick Chapsas