Integrating GPT (or, presumably, any other LLM with an API) is very simple with the use of Qualtrics’ Web Service block, which allows you to make arbitrary HTTP requests, including API calls.
The following guide will use OpenAI’s Chat Completions API to take in user data, use it to prompt a GPT-4 model, and save the response to Qualtrics embedded data for future use in the survey. (But in theory, you could use essentially these same steps for OpenAI’s other APIs, such as image generation.)
First, you'll need a few things from OpenAI:

- **API key**: your OpenAI API key, which will look something like sk-proj-<long random alphanumeric string>.
- **Model settings**: the model you want to use, plus any optional parameters, such as temperature and max_completion_tokens.

(I've just realized that I've formatted this list with bolded bullet points and emojis, which is exactly what ChatGPT would do. I promise I wrote this myself.)
Finally, if you plan to use the Chat Completions API, you’ll also have to provide a message history as your input.
Essentially, instead of just sending the model a single prompt like you do on the web version, you’ll provide it a series of past messages as a JSON list, with each element being a message, representing a conversation in progress. (Basic intro to JSON here.)
Each message is identified by a role, which can be:

- **system** or **developer**: for system instructions, telling your model how to answer future prompts from the user
- **user**: for user messages, which represent the user prompting the model
- **assistant**: for past assistant messages (i.e., you pre-write a response in the role of the assistant to help improve the quality of the completion, such as a sample answer to a task you want it to do)

These are further described here in the documentation. Here is a sample message history:
[
{"role": "developer", "content": "You are a helpful assistant. Please only answer in pirate-speak."},
{"role": "user", "content": "Give me instructions to make a birthday cake."}
]
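If you want to double-check your message history before pasting it into Qualtrics, the same structure can be built as a Python list of dictionaries and serialized with `json.dumps` — a quick sanity check that it's valid JSON (a sketch, not part of the Qualtrics setup itself):

```python
import json

# The same pirate-speak history as above, as a Python list of dictionaries.
messages = [
    {"role": "developer",
     "content": "You are a helpful assistant. Please only answer in pirate-speak."},
    {"role": "user",
     "content": "Give me instructions to make a birthday cake."},
]

# json.dumps produces the exact JSON text you would paste into the Web Service block.
payload = json.dumps(messages)

# Round-tripping confirms the history is valid JSON and survives intact.
assert json.loads(payload) == messages
```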
The message history is where you integrate survey responses into your prompt. For example, if you want to provide emotional support to a user who shares a scenario from their life, you can use piped text to insert their response to an earlier question into your message history:
[
{"role": "developer",
"content": "You are a helpful assistant who will provide emotional support to the user when they provide you with a scenario in the following message."
},
{"role": "user",
"content": "${q://QID1/ChoiceTextEntryValue}"
}
]
You can also use piped text to include embedded data in your message history, like ${e://Field/data_name}.
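To see what piped text is doing for you, here's a rough simulation in plain Python: Qualtrics swaps the `${q://QID1/ChoiceTextEntryValue}` placeholder for the participant's actual answer before the request is sent. (The participant answer below is made up for illustration.)

```python
import json

# Message history template as stored in the Web Service block,
# with a Qualtrics piped-text placeholder in the user message.
template = [
    {"role": "developer",
     "content": "You are a helpful assistant who will provide emotional support to the "
                "user when they provide you with a scenario in the following message."},
    {"role": "user",
     "content": "${q://QID1/ChoiceTextEntryValue}"},
]

# A hypothetical participant answer to question QID1.
participant_answer = "I had a stressful week at work."

# Qualtrics substitutes piped text before sending; we mimic that with a string replace.
messages = [
    {**m, "content": m["content"].replace("${q://QID1/ChoiceTextEntryValue}",
                                          participant_answer)}
    for m in template
]

print(json.dumps(messages, indent=2))
```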
Now that you’ve done all that, you just need to insert a Web Service block into your Qualtrics survey flow, and fill in the above pieces.
- **URL**: https://api.openai.com/v1/chat/completions.
- **Method**: POST.
- **Content type**: application/json, to denote the format you will provide this input.
- **Body**: your request parameters, including the model you've chosen (e.g. gpt-4o-mini) and your message history.
- **Authorization header**: "Bearer sk-proj-[random string]".

And that's your input! Before we get to the last part of the Web Service block:
Security Note: You might notice that this method involves storing your API key in the Qualtrics survey flow, which I understand is not ideal from a security perspective! Just make 100% sure that you do not give access to this Qualtrics survey to anybody you do not trust with this API key, and if you share this survey as a .qsf file, please take out the API key first!
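Putting the input pieces together, the Web Service block is effectively sending a request like the following (sketched here in Python; the API key and parameter values are placeholders, and nothing is actually sent over the network):

```python
import json

url = "https://api.openai.com/v1/chat/completions"

# Headers: JSON content type plus the Authorization header carrying your API key.
# The key below is a placeholder -- never commit or share a real one.
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-proj-REPLACE_ME",
}

# Request body: the model name, the message history, and any optional parameters.
body = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "developer",
         "content": "You are a helpful assistant. Please only answer in pirate-speak."},
        {"role": "user",
         "content": "Give me instructions to make a birthday cake."},
    ],
    "temperature": 0.7,            # optional: sampling temperature (placeholder value)
    "max_completion_tokens": 300,  # optional: cap on response length (placeholder value)
}

# This JSON string is what travels over the wire as the POST body.
post_body = json.dumps(body)
```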
The last thing you’ll specify in the Web Service block is what to do with the output you receive from the API. The output will be formatted as JSON, as described here in the documentation.
The Qualtrics Web Service block allows you to really easily extract this JSON and save it into embedded data in your survey. In the “Set Embedded Data” section, specify the embedded data variable name on the left, and on the right, indicate the part of the JSON you would like to save.
The JSON response is nested, containing both dictionaries (indexed by keys) and lists (indexed by numbers). To access individual fields, we start at the top of the hierarchy, selecting items below using their index and using a period (.) to advance to the next level.
So, in the above image, we're selecting the value choices.0.message.content, which means, starting from the whole response JSON: "Get the list at index choices, select the first item, get the dict at index message, and get the text at index content."
If you take a look at the structure of the output JSON in the documentation, this will make more sense. But unless you've asked for multiple responses at once, choices.0.message.content just selects the content of the single GPT chat completion response, so you can just use that.
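To see how choices.0.message.content walks the response, here's the same traversal in Python on a trimmed-down sample response (the field values are illustrative, not real API output):

```python
import json

# A trimmed-down sample Chat Completions response; field values are illustrative.
sample_response = json.loads("""
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Arrr, first ye preheat the oven..."},
      "finish_reason": "stop"
    }
  ]
}
""")

# choices.0.message.content in Qualtrics corresponds to this chain of lookups:
content = sample_response["choices"][0]["message"]["content"]
print(content)
```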
Once this is saved into the embedded data, you can continue with your survey, using this variable as needed! (You could even include this output as the input to another API call, inserting the first response as piped text.)
To summarize:
Some more things to think about: