Your Development Kickstart
Learn to make your first AI API call, set up authentication, and understand token-based pricing.
We’ve learned about the potential of AI development, but now it’s time to bring it to life. In this lesson, we will write our first lines of code that talk to an AI, see it respond, and build something that we can actually use.
By the end of this lesson, we’ll have a very simple yet working example of how to work with AI.
How to obtain an OpenAI API key
Before we can interact with OpenAI’s AI models, we need to get permission. Think of an API key like a special password that identifies us as a legitimate developer. Here’s how to get yours.
Create your OpenAI account: Head to the official OpenAI site and sign up.
Navigate to API keys: Once logged in, find the “API Keys” section in your dashboard.
Create a new key: Click “Create new secret key” and give it a memorable name like “My First AI Project.”
Copy and save it immediately: This is crucial. You’ll only see the full key once!
Remember! Treat your API key like a password. Never share it, post it online, or hardcode it directly in your code. Professional developers use environment variables, and that’s what we’ll do, too.
Note: Your API key will not work until your OpenAI account has an active payment method or available credit. To enable your key, navigate to the “Billing” section in your OpenAI account settings, located directly below the “API Keys” tab, and add a payment method or credit card. Once that’s done, your API key will be ready to use in the exercises that follow.
On your device, open your terminal and run:
# On macOS or Linux, open your terminal and run:
export OPENAI_API_KEY="your_actual_key_here"

# On Windows, open PowerShell and run:
setx OPENAI_API_KEY "your_actual_key_here"
To check if your key is set correctly, run:
echo $OPENAI_API_KEY
You should see your API key printed out. If you see nothing, run the export command again (note that on Windows, setx only takes effect in a new terminal window).
Why environment variables? They keep your secrets secure and separate from your code. When you share your code with others or upload it to GitHub, your API key stays safely on your machine. The OpenAI SDK automatically looks for the OPENAI_API_KEY environment variable, so you never have to paste the key into your code.
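If you'd like to confirm from Python that the key is visible to your scripts, you can read the environment variable yourself. This is a minimal sketch; `OPENAI_API_KEY` is the variable name the SDK expects, and the helper function name is our own:

```python
import os

def api_key_is_set() -> bool:
    """Return True if the OPENAI_API_KEY environment variable is non-empty."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if api_key_is_set():
    print("API key found in the environment.")
else:
    print("OPENAI_API_KEY is not set -- run the export/setx command again.")
```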
How to install OpenAI library
Now, let’s install the tools that will let your Python code talk to OpenAI’s servers. The OpenAI Python library handles all the complex networking, authentication, and data formatting for you.
Open your terminal or command prompt and run:
pip install openai
This single command downloads and installs everything you need to build AI applications. Behind the scenes, you're getting a powerful toolkit that handles API requests, response parsing, error handling, and much more. We recommend using Python 3.11 or higher for the best compatibility; in this course, we'll be using Python 3.13.
Let’s make sure that everything is installed correctly:
python -c "import openai; print('Ready to go!')"
If you see “Ready to go!” printed out, you’re all set. If you get an error, double-check that you have Python installed and try the pip command again.
The OpenAI library is your bridge to AI. It knows how to format your requests, send them securely to OpenAI’s servers, and translate the responses into Python objects that you can work with. Think of it as a translator that speaks both Python and AI.
How to use GPT models for output
Now, all we have to do is create a new file and add the following code:
Note: Please enter your OpenAI API key in the widget below. You’ll only need to do this once and it will remain available throughout the course.
from openai import OpenAI
client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Write a one-sentence story about AI."
)

print(response.output_text)
If everything worked, you should see a creative story about AI appear on your screen. Congratulations! You just had your first AI conversation through code, rather than web applications like ChatGPT.
Let’s break down what each line does.
Line 1: This brings the OpenAI toolkit into your program.
Line 2: This creates your connection to OpenAI’s servers. It automatically finds your API key from the environment variable.
Line 4: This is the method that sends your request to the AI. responses.create() is OpenAI’s primary method for generating AI responses; think of it as the bridge between your input and the AI’s output. When we call it, we’re sending a request to OpenAI’s servers with our text, images, or files as input, along with specifications about which model to use.
Line 5: This tells OpenAI which AI model to use. GPT-5 is their most capable model.
The method takes our input, whether it’s a simple string like “What is machine learning?” or a conversation array with multiple roles, and processes it through the specified AI model. It then returns a structured response object containing the AI’s generated text. This is the fundamental building block that powers everything from simple Q&A interactions to sophisticated multi-turn conversations.
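As a sketch of those two input shapes, here is what the input argument can look like as plain Python data. The conversation array below is illustrative; the message contents are made up, and either value could be passed as input= to client.responses.create():

```python
# Shape 1: a simple string prompt.
simple_input = "What is machine learning?"

# Shape 2: a conversation array -- each entry has a role and content.
conversation_input = [
    {"role": "user", "content": "What is machine learning?"},
    {"role": "assistant", "content": "It's a way for computers to learn patterns from data."},
    {"role": "user", "content": "Give me a one-line example."},
]

print(type(simple_input).__name__, "with", len(conversation_input), "conversation turns")
```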
Note: GPT-5’s knowledge limit date is September 30, 2024. This means that if you ask a question about something that happened after that date, the model will not know the answer and may generate an incorrect response if it does not have access to web search tools.
Line 6: This is your message to the AI, basically what you want it to respond to.
Line 9: This extracts the AI’s actual response from all the technical details. If we print the response object directly, it dumps the whole thing, which contains IDs, timestamps, the model name, token counts, reasoning stubs, metadata, and more. response.output_text pulls out only the assistant’s message text, which is the human-readable sentence we actually care about.
Try changing the input to ask for something different. Each time you run the code, you send a request across the internet to OpenAI’s powerful AI models and get back a unique, intelligent response.
Common hiccups and fixes:
Authentication error: Double-check that your API key is set correctly.
Model not found: Make sure you’re using a valid model.
Rate limit error: You’re making requests too quickly. Wait a moment and try again.
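For rate limit errors in particular, a common pattern is to retry with exponential backoff. Here is a minimal, generic sketch: in a real application you would pass the SDK's rate-limit exception (e.g., openai.RateLimitError) as retry_on; Exception is the default here only to keep the sketch self-contained:

```python
import time

def with_retries(call, retries=3, base_delay=1.0, retry_on=Exception):
    """Call `call()`, retrying with exponential backoff on `retry_on` errors.

    retries    -- total number of attempts before giving up
    base_delay -- seconds to wait after the first failure (doubles each retry)
    retry_on   -- exception type that triggers a retry
    """
    for attempt in range(retries):
        try:
            return call()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like with_retries(lambda: client.responses.create(...), retry_on=openai.RateLimitError).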
What are tokens?
Before we wrap up, there’s one crucial concept every AI developer must understand: tokens. This is the foundation of how AI models work, how much your applications cost, and why certain limits exist.
AI models don’t see words the way humans do. Instead, they break text into smaller units called tokens. Sometimes a token is a whole word, sometimes it’s part of a word, and sometimes it’s just punctuation. Here are some examples:
“Hello”= 1 token.
“world”= 1 token.
“ChatGPT”= 2 tokens (“Chat” + “GPT”).
“understanding”= 2 tokens (“understand” + “ing”).
“!”= 1 token.
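Exact token counts require the model's own tokenizer, but a rough character-based estimate is often useful as a first approximation. The sketch below uses the commonly cited rule of thumb of roughly four characters of English text per token; treat it as an estimate only, and use OpenAI's tokenizer for real counts:

```python
def approx_token_count(text: str) -> int:
    """Estimate token count using the ~4-characters-per-token rule of thumb.

    This is only an approximation for English text; exact counts depend on
    the model's built-in tokenizer.
    """
    return max(1, round(len(text) / 4))

print(approx_token_count("Hello"))
print(approx_token_count("This is a slightly longer sentence."))
```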
Check the tokenizer below and paste some text into it. You’ll see exactly how your sentences are broken down into tokens.
Notice how the tokenizer highlights each token in different colors? This visual representation shows you exactly what the AI model “sees” when processing your text.
Here’s why tokens matter to you:
Cost: You pay per token, not per word. Input tokens (what you send) and output tokens (what the AI responds with) have different prices. You can check the pricing for different models here.
Context limits: Models have a maximum number of tokens they can handle. GPT-5 can work with around 400,000 tokens in a single conversation, while GPT-4o can handle around 128,000 tokens.
Performance: More tokens mean longer processing time and higher costs.
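Since input and output tokens are priced separately, estimating a request's cost is simple arithmetic. The sketch below uses hypothetical per-million-token prices; always check OpenAI's pricing page for the real numbers:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate a request's cost from token counts and per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical prices -- not the real rates for any specific model.
cost = estimate_cost_usd(1_200, 350, input_price_per_m=1.25, output_price_per_m=10.0)
print(f"Estimated cost: ${cost:.6f}")
```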
You can always open your OpenAI dashboard and navigate to the “Usage” section to see:
How many tokens you used today/this month.
Cost overview by model and token type.
Your current rate limits.
Note: Modern LLMs ship with a specific, built-in tokenizer. The model’s embedding matrix is trained on that exact vocabulary and ID mapping, so we can’t swap tokenization at inference without breaking the weights.
Understanding tokens helps you build efficient applications that balance capability with cost. As you develop more complex AI systems, you’ll find yourself thinking in tokens, optimizing prompts, managing conversation length, and designing systems that make the most of every token.