
In The Kitchen With OpenAI: Prepare A Personalized AI Assistant Using ChatGPT

Businesses have sought ways to incorporate AI into their products for many years. In the past, achieving this goal required a significant investment in both time and money to create a model tailored to a specific business. As a result, AI solutions were only available to the companies that could afford them.

Now, with OpenAI’s ChatGPT—an AI language model optimized for conversational interactions—businesses have the ability to create their own AI via a recently released Application Programming Interface (API).

This API enables engineers to architect completely unique, personalized AI assistants through various configuration options for guiding the responses of ChatGPT’s base model. In this post, we will explore these options and examine how changes to them drive unique results.


With just a few lines of code, we can connect to OpenAI’s GPT directly to send and receive messages along with our custom configuration settings.
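A minimal sketch of that connection, assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable (the model name and message below are illustrative placeholders):

```python
# Build the request payload for a single-turn chat completion.
def build_request(user_message):
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("What can you help me with?")

# To actually send the request (requires the `openai` package and an API key):
# import os, openai
# openai.api_key = os.environ["OPENAI_API_KEY"]
# reply = openai.ChatCompletion.create(**request)
# print(reply["choices"][0]["message"]["content"])
```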

The Ingredients


Prompt Engineering

Arguably the most powerful tool for creating a uniquely tailored yet valuable AI assistant is prompt engineering. Prompt engineering involves providing a series of instructions to ChatGPT, dictating its behavior and guiding its responses. Instructions can be as specific or as broad as the situation demands, allowing for a multitude of possibilities when creating the ideal assistant. In our testing, we verified that a user receives a much more valuable response if the AI is prompted with its expected expertise before a question is asked.

For example, if the AI is intended to serve as a marketing assistant for an e-commerce site, it should be prompted to respond as a marketing expert in that field before any questions are asked, to encourage the most valuable response. By doing so, the AI gains context about the user’s needs. Continue reading to learn more about the other configuration options.
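One way to sketch this in code is to prepend a system message that sets the assistant’s expertise before the user’s question is sent (the helper name and prompt text here are illustrative):

```python
def with_persona(system_prompt, user_message):
    # The system message dictates behavior; the user message carries the question.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = with_persona(
    "You are a marketing expert for an e-commerce site. "
    "Answer as a specialist in that field.",
    "How should I announce our spring sale?",
)
```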



Temperature

To address the challenge of consistency and variability, and to emulate the human process of critical analysis and restructuring thoughts, ChatGPT introduced the concept of temperature. Temperature can be thought of as a sliding scale ranging from 0 to 2, where 0 produces more consistent answers, while 2 produces responses with higher variability when asked the same question repeatedly.

Adjusting the temperature allows for tuning the desired responses based on the persona being created. For example, a logically driven AI whose purpose is to analyze data might require a lower temperature than an assistant whose purpose is to aid in creating a new marketing campaign.
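A small sketch of that tuning, with hypothetical per-persona values (the persona names and numbers are illustrative; the API’s valid range is 0 to 2):

```python
# Illustrative temperatures: low for consistent analysis, higher for creativity.
PERSONA_TEMPERATURES = {
    "data_analyst": 0.2,        # consistent, repeatable answers
    "marketing_creative": 1.2,  # more varied, exploratory answers
}

def completion_params(persona, messages):
    return {
        "model": "gpt-3.5-turbo",
        "messages": messages,
        "temperature": PERSONA_TEMPERATURES.get(persona, 1.0),
    }
```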


Conversational Memory

One of the key strengths of ChatGPT is its ability to store key details within a given conversation. This means that the AI can retain the context given in previous statements and refer back to them later.

With the ability to “remember” important information within a given context, humans can leverage one of our greatest achievements—collaboration. For the first time, communication with technology can emulate a think-tank environment to allow humans to be more effective, original and creative than ever before.
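Under the hood, the chat API is stateless: this “memory” works by resending the full message history with every request. A minimal sketch (the messages below are illustrative):

```python
def extend_conversation(history, role, content):
    # Append a turn; sending the whole list gives the model its "memory".
    return history + [{"role": role, "content": content}]

history = []
history = extend_conversation(history, "user", "Our product is a reusable water bottle.")
history = extend_conversation(history, "assistant", "Got it. How can I help market it?")
history = extend_conversation(history, "user", "Write a tagline for it.")
# Passing `history` as `messages` lets the model refer back to the product details.
```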

Token Length

Assigning a token length grants engineers control over the level of conciseness or verbosity of the AI assistant’s responses.

This configuration also requires tuning and adjustment based on the specific use case. Using the same example as above, our analyst may be more concise, whereas the marketing expert might be more descriptive in its explanation.
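In the API this maps to the `max_tokens` parameter, which caps the length of the generated reply. A sketch with hypothetical per-persona limits (the numbers are illustrative):

```python
# Illustrative caps: concise for the analyst, verbose for the marketing expert.
PERSONA_MAX_TOKENS = {
    "data_analyst": 150,
    "marketing_expert": 600,
}

def with_token_limit(params, persona):
    # Merge a persona-specific response cap into existing request parameters.
    return {**params, "max_tokens": PERSONA_MAX_TOKENS.get(persona, 300)}
```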

API Integrations

In addition to the aforementioned configurations, OpenAI also allows engineers to integrate other APIs into ChatGPT, providing it with additional context and knowledge to better assist the user.

For instance, engineers can use the Google location API to create a logistics expert, enabling ChatGPT to optimize routes and reduce costs.
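One pattern for this integration is to fetch data from the external service and inject it as system context ahead of the user’s question. A sketch, where `fetch_route_summary` is a hypothetical wrapper around a mapping API, not a real function:

```python
def inject_context(external_data, user_message):
    # External API results ride along as system context for the model.
    return [
        {"role": "system",
         "content": "You are a logistics expert. Current route data: " + external_data},
        {"role": "user", "content": user_message},
    ]

# route = fetch_route_summary(origin, destination)  # hypothetical mapping-API call
# messages = inject_context(route, "How can we cut delivery costs this week?")
```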



To demonstrate and test GPT’s abilities, we created a simple chat UI that allows a user to configure a series of options and generate different responses.


The Marketing Expert

In this test, we created an assistant who is an expert in the field for which we are asking for help. Because the assistant has expertise in the field we are inquiring about, we can expect valuable output. Here we see the assistant is attuned to the fact that it does not have enough information to give a valuable answer. As a result, it asks follow-up questions to craft the best response possible. This is the behavior we would expect from a high-performing assistant.

The Engineer

In this test, we created an assistant who is an expert in a field different from the one we are asking about. The expectation here is that the assistant will attempt to give a helpful response, but it will not be as accurate as in the previous example. In its response, we can see the assistant does its best to be helpful, but it lacks the contextual prompt that would tell it how to be more so.


With these powerful tools at their disposal, engineers can mix and match these options to create completely unique personas, specifically tailored to solve real problems across a wide range of real-world scenarios. To learn more about OpenAI and how Large Language Models (LLMs) work, feel free to read my post on another one of their products, GitHub Copilot. Thank you for reading!

Ready to get started?
Contact Hypercolor Digital