Why Use Python for AI Projects
Python is one of the most popular programming languages in AI development because it’s easy to read, widely supported, and works seamlessly with the OpenAI API. Using Python, you can send prompts (text instructions) to OpenAI’s language models like GPT‑3.5 or GPT‑4 and receive intelligent responses, all without needing advanced machine learning skills.
What You Need Before You Begin
- Python 3.7 or newer
- Basic command line knowledge (how to run pip install)
- An OpenAI account with an API key
- A code editor such as Visual Studio Code
If you’re missing any of these, review our guides on installing Python and getting your OpenAI API key.
Install Required Libraries
Open your terminal and run:
pip install openai python-dotenv

This installs:

- openai: the latest official Python SDK (version 1.0+)
- python-dotenv: lets you load sensitive environment variables like your API key from a separate file
Secure Your API Key with a .env File
Instead of hardcoding your API key, store it in a .env file in your project directory:
OPENAI_API_KEY=sk-abc123examplekey

Then load it securely in your script:
import os
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")

How Token Usage and Costs Work
OpenAI charges based on tokens. A token is a small chunk of text—about 4 characters on average. For example, “ChatGPT is great!” is about 5 tokens. Both your prompt and the AI’s response use tokens. You’re billed for the total number of tokens per request.
Note: Each message sent also has a small token overhead (about 3–4 tokens for formatting and metadata). Plan accordingly when using long conversations.
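If you want a rough sense of cost before sending a request, you can apply the ~4-characters-per-token rule of thumb from above. The helper below (estimate_tokens is a hypothetical name, not part of the SDK) is only an approximation; for exact counts, OpenAI's tiktoken library is the accurate tool.

```python
import math

def estimate_tokens(text, chars_per_token=4, per_message_overhead=4):
    # Rough heuristic: ~4 characters per token, plus a small
    # per-message overhead for chat formatting and metadata.
    return math.ceil(len(text) / chars_per_token) + per_message_overhead

prompt = "ChatGPT is great!"
print(estimate_tokens(prompt))  # a ballpark figure, not an exact count
```

Because both the prompt and the reply count toward billing, estimates like this are useful mainly for spotting unexpectedly long inputs before they run up costs.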
Sending a Prompt Using the OpenAI Client
OpenAI’s latest SDK uses a Client object. Here’s how to set it up and send a message to GPT‑3.5:
from openai import OpenAI
client = OpenAI(api_key=api_key)
def chat_once(user_input):
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_input}
            ]
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {e}"
This sends a message to the model and extracts the AI’s reply from the response object. Wrapping the call in try/except prevents crashes if anything goes wrong, such as a network failure or an invalid request.
Make It Interactive: Terminal Chatbot
Add this loop to allow real-time conversation:
def run_chat():
    print("Type 'exit' to quit.")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "exit":
            print("Session ended.")
            break
        response = chat_once(user_input)
        print("Bot:", response)

if __name__ == "__main__":
    run_chat()
Understanding the Response Structure
The API returns structured JSON, which the SDK parses into Python objects. Here’s the breakdown:
response.choices[0].message.content

This accesses the first choice (the only one in most cases), inside which is a message object with the assistant’s reply.
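To see why that attribute chain works, here is a simplified plain-dict mock of the response shape (the values are made up for illustration). The real SDK returns typed objects rather than dicts, but the nesting is the same:

```python
# Hypothetical, hand-written mock of the JSON shape the API returns.
mock_response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 8, "total_tokens": 28},
}

# Same path as response.choices[0].message.content, in dict form:
reply = mock_response["choices"][0]["message"]["content"]
print(reply)
```

The usage field is also worth knowing about: it reports how many tokens the request consumed, which ties back to the billing discussion above.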
Troubleshooting and Common Errors
- ModuleNotFoundError: Install the missing library with pip install openai
- InvalidRequestError: Check that the model name is correct (e.g., gpt-3.5-turbo)
- AuthenticationError: Ensure your .env file is in the same directory and the key is valid
- RateLimitError: You’re making requests too quickly; pause before sending another
- TimeoutError or ConnectionError: Likely a network issue; check your internet connection
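For transient failures like RateLimitError or a dropped connection, a common pattern is to retry with exponential backoff rather than fail immediately. The sketch below is generic (with_retries is a hypothetical helper, not part of the OpenAI SDK) and is demonstrated with a stand-in function so it runs without an API key:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0):
    # Retry fn() with exponential backoff: base_delay, 2x, 4x, ...
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for a real API call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.1))  # succeeds on the third attempt
```

In a real script you would pass a lambda wrapping chat_once (or the client call itself) as fn; the backoff gives the API time to recover before the next attempt.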
What to Do Next
Once you’ve built this simple chatbot, expand it with:
- Prompt engineering techniques to write more effective inputs
- Deploying your AI chatbot online using Flask
- Fine‑tuning a model with your own examples
- RAG (Retrieval-Augmented Generation) to let your AI use custom documents
Key Takeaways
- Python and OpenAI’s SDK give beginners an easy way to build chatbots and explore AI projects.
- Use environment variables and the dotenv package to keep your API key safe and separate from your code.
- The updated OpenAI client requires new syntax with client.chat.completions.create(), not the older global functions.
FAQs
How do I avoid exposing my API key?
Keep it in a file named .env and load it with the dotenv library. Never share this file publicly.
Why is GPT-3.5 recommended for beginners?
GPT‑3.5‑turbo is faster and significantly cheaper than GPT‑4, making it ideal for experimenting with real AI projects without incurring high costs.
What’s the best way to inspect a response?
Use print(response) or print(response.model_dump_json(indent=2)) to view the full structured data and understand what’s available.
Keep Reading
- Prompt Engineering for Beginners – Write more effective instructions for clearer, faster answers.
- Using Function Calling with OpenAI – Let your AI assistant run tools and calculations.
- Deploy Your AI Chatbot on the Web – Turn your chatbot into a usable web app.
- Fine‑Tune GPT on Your Data – Customize model behavior with example conversations.
- Build a RAG‑Powered Chatbot – Let your assistant answer using your files and knowledge base.
- Python vs JavaScript for OpenAI – Choose the best language for your skillset and goals.