Using Azure OpenAI with Python: A Step-by-Step Guide
Azure OpenAI offers powerful tools for integrating AI capabilities into your applications. In this blog post, I'll guide you through setting up and using Azure OpenAI with Python, walking through a short code snippet and explaining each part so you can understand and implement it effectively.
Prerequisites
Before you start, make sure you have:
- An Azure account with OpenAI service enabled.
- The openai Python package installed. You can install it using pip:
pip install openai
Detailed Explanation
Imports and Setup
import os
from openai import AzureOpenAI
- The os module is used for interacting with the operating system. It isn't actually used in this snippet as written, but it comes in handy for reading configuration from environment variables, as shown below.
- AzureOpenAI is imported from the openai library to interact with Azure's OpenAI service.
Configuration
endpoint = "Your_Endpoint"  # e.g. https://<your-resource-name>.openai.azure.com/
key = "Your_Key"  # Your API key
model_name = "gpt-4o"  # Your model deployment name
- endpoint: Replace "Your_Endpoint" with the endpoint URL of your Azure OpenAI resource. It should look something like https://<your-resource-name>.openai.azure.com/.
- key: Replace "Your_Key" with your actual API key.
- model_name: Specify the model you want to use. Here, "gpt-4o" is the chosen model. Make sure it matches the name of the model deployment you have access to in your Azure OpenAI resource.
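Hard-coding credentials is fine for a quick test, but it also explains why the os module was imported: you can read these values from environment variables instead. Here is a minimal sketch, assuming you have exported AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_KEY, and AZURE_OPENAI_MODEL in your shell (these variable names are just examples, not something the library requires):
import os
# Read the configuration from environment variables instead of hard-coding it
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<your-resource-name>.openai.azure.com/
key = os.environ["AZURE_OPENAI_KEY"]  # your API key
model_name = os.environ.get("AZURE_OPENAI_MODEL", "gpt-4o")  # your model deployment name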
Creating the Client
client = AzureOpenAI(
    azure_endpoint=endpoint,
    api_version="2024-02-01",
    api_key=key,
)
- An instance of AzureOpenAI is created with the specified endpoint, API version, and API key.
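If you would rather not manage API keys at all, the client can also authenticate with Microsoft Entra ID. This is only a sketch, assuming the azure-identity package is installed and your signed-in identity has been granted access to the Azure OpenAI resource:
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Fetch tokens from your Azure identity instead of passing an API key
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint=endpoint,
    api_version="2024-02-01",
    azure_ad_token_provider=token_provider,
)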
Making a Request
completion = client.chat.completions.create(
    model=model_name,
    messages=[
        {
            "role": "user",
            "content": "What is AI?",  # Your question can go here
        },
    ],
)
- The client.chat.completions.create method is called to generate a completion.
- model: The model you want to use (model_name).
- messages: A list of message objects. Each message is a dictionary with role (such as user, assistant, or system) and content (the text of the message).
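Once the call returns, the generated answer lives in the choices list of the completion object. A minimal sketch of reading and printing it:
# The assistant's reply is in the first choice's message content
answer = completion.choices[0].message.content
print(answer)

# The response also reports token usage, which is useful for tracking cost
print(completion.usage)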