Why Use Python for AI Projects

Python is one of the most popular programming languages in AI development because it’s easy to read, widely supported, and works seamlessly with the OpenAI API. Using Python, you can send prompts (text instructions) to OpenAI’s language models like GPT‑3.5 or GPT‑4 and receive intelligent responses, all without needing advanced machine learning skills.

What You Need Before You Begin

You need a working Python installation (3.8 or later) and an OpenAI API key. If you’re missing either of these, review our guides on installing Python and getting your OpenAI API key.

Install Required Libraries

Open your terminal and run:

pip install openai python-dotenv

This installs two packages: openai, the official client library for the OpenAI API, and python-dotenv, which loads environment variables from a .env file.

Secure Your API Key with a .env File

Instead of hardcoding your API key, store it in a .env file in your project directory:

OPENAI_API_KEY=sk-abc123examplekey

Then load it securely in your script:

import os
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
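A quick guard catches a missing key at startup rather than at the first failed API call. Here is a minimal sketch; the helper name is our own:

```python
import os

def require_api_key() -> str:
    """Fetch the OpenAI key from the environment, failing fast if it is absent."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Add it to your .env file and call load_dotenv() first."
        )
    return key
```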

How Token Usage and Costs Work

OpenAI charges based on tokens. A token is a small chunk of text—about 4 characters on average. For example, “ChatGPT is great!” is about 5 tokens. Both your prompt and the AI’s response use tokens. You’re billed for the total number of tokens per request.

Note: Each message sent also has a small token overhead (about 3–4 tokens for formatting and metadata). Plan accordingly when using long conversations.
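The 4-characters-per-token rule gives a quick way to ballpark a request before sending it. The estimator below is a rough heuristic (exact counts require a tokenizer such as tiktoken), and the price constant is an illustrative placeholder, not a current rate:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token rule of thumb."""
    return max(1, round(len(text) / 4))

def estimate_cost(prompt: str, reply: str, price_per_1k: float = 0.0015) -> float:
    """Approximate dollar cost of one exchange; price_per_1k is an assumed example rate."""
    total = estimate_tokens(prompt) + estimate_tokens(reply)
    return total / 1000 * price_per_1k
```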

Sending a Prompt Using the OpenAI Client

OpenAI’s latest SDK uses a Client object. Here’s how to set it up and send a message to GPT‑3.5:

from openai import OpenAI

client = OpenAI(api_key=api_key)

def chat_once(user_input):
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_input}
            ]
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {e}"

This sends a message to the model and extracts the assistant’s reply from the structured response object. Wrapping the call in try/except keeps the script from crashing on network or API errors.

Make It Interactive: Terminal Chatbot

Add this loop to allow real-time conversation:

def run_chat():
    print("Type 'exit' to quit.")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "exit":
            print("Session ended.")
            break
        response = chat_once(user_input)
        print("Bot:", response)

if __name__ == "__main__":
    run_chat()
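Note that this loop is stateless: each call to chat_once starts a fresh conversation. A common extension is to keep a running message history and trim it so token usage stays bounded. A minimal sketch, with an illustrative helper name and trimming policy:

```python
def add_turn(history, role, content, max_turns=10):
    """Append a message, then keep the system prompt plus only the most recent turns."""
    history.append({"role": role, "content": content})
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    # Keep up to max_turns user/assistant exchanges (two messages per exchange).
    return system + rest[-max_turns * 2:]
```

You would then pass the accumulated history as the messages argument instead of building a fresh two-message list on every call.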

Understanding the Response Structure

OpenAI responses are structured objects (serializable to JSON). Here’s the breakdown:

response.choices[0].message.content

This accesses the first choice (the only one in most cases), inside which is a message object with the assistant’s reply.
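To make that path concrete, here is a simplified Python dict mirroring the shape of a chat completion response (illustrative values, not the full schema):

```python
# Simplified sketch of a chat completion response, as a plain dict
sample = {
    "id": "chatcmpl-abc123",
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 18, "completion_tokens": 7, "total_tokens": 25},
}

# Same path as response.choices[0].message.content, in dict form
reply = sample["choices"][0]["message"]["content"]
```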

Troubleshooting and Common Errors

If you see an AuthenticationError, your API key is missing or invalid; double-check your .env file and that load_dotenv() runs before the client is created. A RateLimitError means you’re sending requests too quickly or have exhausted your quota; wait and retry. A ModuleNotFoundError usually means the install step was skipped; rerun pip install openai python-dotenv in the same environment your script uses.

What to Do Next

Once you’ve built this simple chatbot, you can expand it with conversation memory, custom system prompts, or a simple web interface.

Key Takeaways

Python plus the OpenAI client library is the fastest route to a working AI script. Keep your API key in a .env file rather than in code, remember that both prompts and responses consume billable tokens, and start with GPT‑3.5‑turbo while you experiment.

FAQs

How do I avoid exposing my API key?

Keep it in a file named .env and load it with the dotenv library. Never share this file publicly.

Why is GPT-3.5 recommended for beginners?

GPT‑3.5‑turbo is faster and significantly cheaper than GPT‑4, making it ideal for experimenting with real AI projects without incurring high costs.

What’s the best way to inspect a response?

Use print(response) or print(response.model_dump_json(indent=2)) to view the full structured data and understand what’s available.
