Using Azure OpenAI with Python: A Step-by-Step Guide

Elanthirayan

Azure OpenAI offers powerful tools for integrating AI capabilities into your applications. In this blog post, I’ll guide you through setting up and using Azure OpenAI with Python, walking through a short code snippet and explaining each part so you can understand and implement it effectively.

Prerequisites

Before you start, make sure you have:

  1. An Azure subscription with access to the Azure OpenAI Service and a model deployment created.
  2. The openai Python package installed. You can install it using pip:
pip install openai
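
You can quickly confirm the package is installed and importable (this guide assumes the 1.x series of the SDK, which is where the AzureOpenAI client lives):

python -c "import openai; print(openai.__version__)"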

Detailed Explanation

Imports and Setup

import os
from openai import AzureOpenAI
  • The os module provides access to operating-system features such as environment variables. It isn’t strictly needed in this snippet, but it is commonly used to read the API key from an environment variable instead of hardcoding it (see the sketch in the Configuration section below).
  • AzureOpenAI is imported from the openai library to interact with Azure's OpenAI service.

Configuration

endpoint = "Your_Endpoint"  # https://xyz.azure.com/ 
key = "Your_Key" # Your API key
model_name = "gpt-4o" # Your model name
  • endpoint: Replace "Your_Endpoint" with the endpoint URL of your Azure OpenAI resource. It should look something like https://<your-resource-name>.openai.azure.com/.
  • key: Replace "Your_Key" with your actual API key.
  • model_name: Specify the model you want to use. Here, "gpt-4o" is the chosen model. For Azure OpenAI this must match the name of your model deployment, which can differ from the name of the underlying model.
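
Hardcoding secrets in source files makes them easy to leak. A minimal sketch of reading the same values from environment variables with the os module imported earlier; the variable names below are just a convention I’m assuming, so set them to whatever you actually use:

import os

# Assumed environment variable names - adjust to whatever you actually set.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<your-resource-name>.openai.azure.com/
key = os.environ["AZURE_OPENAI_API_KEY"]
model_name = os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o")  # your deployment name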

Creating the Client

client = AzureOpenAI(
    azure_endpoint=endpoint,
    api_version="2024-02-01",
    api_key=key,
)
  • An instance of AzureOpenAI is created with the specified endpoint, API version, and API key. The api_version pins the Azure OpenAI REST API version that the SDK calls.
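
If you would rather not handle API keys at all, the client can also authenticate with Microsoft Entra ID through the azure-identity package (pip install azure-identity). A minimal sketch, assuming your identity has been granted a suitable role on the Azure OpenAI resource:

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Fetch Entra ID tokens for the Cognitive Services scope instead of passing an API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=endpoint,
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",
)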

Making a Request

completion = client.chat.completions.create(
    model=model_name,
    messages=[
        {
            "role": "user",
            "content": "What is AI?",  # Your question can go here
        },
    ],
)
  • The client.chat.completions.create method is called to generate a completion.
  • model: The model you want to use (model_name).
  • messages: A list of message objects. Each message is a dictionary with a role (such as system, user, or assistant) and content (the text of the message), as shown in the sketch below.
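
You are not limited to a single user message. Here is a sketch of a multi-turn request with a system prompt and two common optional parameters; the prompt text and values are only illustrative:

completion = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is AI?"},
        {"role": "assistant", "content": "AI is the simulation of human intelligence by machines."},
        {"role": "user", "content": "Give me one everyday example."},
    ],
    temperature=0.7,  # lower values make answers more deterministic
    max_tokens=200,   # upper bound on the length of the reply
)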

Printing the Response

print(completion.choices[0].message.content)
  • This prints the text of the model’s reply. The API returns one or more choices; choices[0].message.content is the content of the first (and here only) one.
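
If you want to look at the whole response object rather than just the reply text, the SDK’s response models can be dumped as JSON, and the usage field reports token counts; a small sketch:

# Full response object as JSON - handy for debugging.
print(completion.model_dump_json(indent=2))

# Token accounting for the prompt and the generated reply.
print(completion.usage.prompt_tokens, completion.usage.completion_tokens)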

Complete code

import os
from openai import AzureOpenAI

endpoint = "Your_Endpoint"  # e.g. https://<your-resource-name>.openai.azure.com/
key = "Your_Key"  # Your API key
model_name = "gpt-4o"  # Your model deployment name

client = AzureOpenAI(
    azure_endpoint=endpoint,
    api_version="2024-02-01",
    api_key=key,
)

completion = client.chat.completions.create(
    model=model_name,
    messages=[
        {
            "role": "user",
            "content": "What is AI?",  # Your question can go here
        },
    ],
)

print(completion.choices[0].message.content)
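
In a real application you will likely also want to handle transient failures. A minimal sketch using the exception types the openai package exposes:

from openai import APIConnectionError, APIStatusError, RateLimitError

try:
    completion = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": "What is AI?"}],
    )
    print(completion.choices[0].message.content)
except RateLimitError:
    print("Rate limited - retry later, ideally with exponential backoff.")
except APIConnectionError as e:
    print(f"Could not reach the endpoint: {e}")
except APIStatusError as e:
    print(f"The API returned an error: {e.status_code}")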

Running the Code

  1. Replace "Your_Endpoint" and "Your_Key" with your actual Azure endpoint and API key.
  2. Make sure the model name ("gpt-4o") matches the name of your deployment in the Azure portal.
  3. Save the code to a Python file, for example, azure_openai_example.py.
  4. Execute the script:
python azure_openai_example.py

You should see the response from the OpenAI model printed to the console.

Conclusion

This code demonstrates how to set up and use the Azure OpenAI service with Python. You can customize the messages list to ask different questions or build more complex, multi-turn interactions, and this integration can be the foundation for AI-driven features in your applications; one possible way to package it is sketched below.
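
As one possible starting point, here is a sketch of a small helper; the ask function and its defaults are my own illustration, not part of the SDK:

def ask(client, model_name, question, system_prompt="You are a helpful assistant."):
    """Send a single question to the deployment and return the reply text."""
    completion = client.chat.completions.create(
        model=model_name,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

print(ask(client, model_name, "What is AI?"))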

Feel free to experiment with different models and settings to explore the full capabilities of Azure OpenAI. Happy coding!

If you have any questions or need further assistance, feel free to ask in the comments below. Happy AI exploring!
