Boost 4D with OpenAI!

Artificial Intelligence is rapidly disrupting our daily lives, for developers and non-developers alike. Most of you, if not all, have already heard about ChatGPT from OpenAI. This post presents a proof of concept built with Qodly Studio around a practical use case: using AI to generate credible, realistic datasets for your apps. In other words: how to quickly fill your app with data so you can test or demonstrate it. Get ready!

Who is who

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) model that has been fine-tuned for natural language understanding and generation in a conversational context. It is designed to engage in text-based conversations with users, providing human-like responses and generating coherent and contextually relevant text. ChatGPT can be used for a wide range of applications, including chatbots, virtual assistants, customer support, and more, where interacting with users through natural language is required.

OpenAI API is the programming interface that allows developers to access and use the capabilities of ChatGPT and other models provided by OpenAI. It acts as a bridge between the model and developers’ applications, enabling them to send text prompts and receive model-generated responses.
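
Concretely, a call to the chat completions endpoint is an HTTPS POST whose JSON body carries the model name and the conversation so far. A simplified example of that body (the prompt text here is purely illustrative):

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a data generator."},
    {"role": "user", "content": "Generate a list of 10 first names."}
  ]
}

The response contains a choices array whose first element holds the generated message.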

What’s in it for me?

Being a REST API, it is quite easy to use with 4D, and we’ll see that through an interesting use case. Let’s suppose you are working on a brand-new app. You design your structure: dataclasses, attributes, and relations. You design your screens and transactions, either as desktop screens or Qodly webforms. Now what’s missing? Data, of course!

You do not always have a CSV or JSON file ready to import, nor the time to collect and clean a relevant dataset. That’s where AI can help. In this proof of concept, we show you what a simple yet useful and practical AI usage could look like for you as a developer.

Imagine you could simply say “give me French first names” or “give me typical timesheet comments” and have your database filled accordingly. Watch the video below for more examples of how to produce useful data with an app made with Qodly Studio.

Getting curious? You can get the demo’s code here (4D v20 R2 minimum):

4D, Qodly and OpenAI

Feel free to play with it, improve it or adapt it to your cases. And don’t forget to contribute!

How does it work?

In fact it is quite simple, nothing fancy.

Apart from listing dataclasses and their attributes (a common task in 4D generic programming), the heart of this demo beats around querying the OpenAI API.

This is done in a dedicated user class, in a function called queryOpenAI().

Function queryOpenAI() : Text
  var $url : Text
  var $headers; $data; $opts : Object
  var $request : 4D.HTTPRequest
	
  $url:="https://api.openai.com/v1/chat/completions"
  $headers:=New object("Authorization"; "Bearer "+This.apiKey; "Content-Type"; "application/json")
	
  // Build the request body: model + conversation history + the user's question
  $data:={}
  $data.model:="gpt-3.5-turbo"
  $data.messages:=This.messages.copy()
  $data.messages.push({role: "user"; content: This.userPrompt})
	
  $opts:={method: "POST"; headers: $headers; body: $data}
	
  // Send the request and wait synchronously for the response
  $request:=4D.HTTPRequest.new($url; $opts)
  $request.wait()
	
  This.fetchStatusCode:=$request.response.status
	
  If (This.fetchStatusCode=200)
    // The JSON response is parsed as an object; the generated text
    // is in the first element of the choices collection
    return $request.response.body.choices[0].message.content
  Else 
    return ""
  End if 

If you are used to using the 4D.HTTPRequest class, this function holds no secrets for you (note that you have to use your own API key after registration). No secrets? Except for the properties This.messages and This.userPrompt, used to query OpenAI by placing them in the body of the request.

Both of these properties are where your creativity comes into play. Have a look at the class constructor to get the answer:

This.systemPrompt:="You are a data generator. "
This.systemPrompt+="You will be provided with a description of the values to generate, and your task is to generate as many values as requested. "
This.systemPrompt+="Generated values must be separated by the character separator ¶. "
This.systemPrompt+="The list must start with 2 characters: ¶¶. "
This.systemPrompt+="The list must end with 2 characters: ¶¶. "
	
This.messages:=[]
This.messages.push({role: "system"; content: This.systemPrompt})
This.messages.push({role: "user"; content: "Generate a list of exactly 10 values for \"firstname\" of type Text."})
This.messages.push({role: "assistant"; content: "¶¶Alice¶Oliver¶Elsa¶Liam¶Maja¶Noah¶Ella¶Lucas¶Wilma¶Hugo¶¶"})
This.messages.push({role: "user"; content: "Generate a list of exactly 10 values for \"amount\" of type number."})
This.messages.push({role: "assistant"; content: "¶¶35¶64797¶101246¶3¶119¶4477¶647779¶357769¶94¶77¶¶"})
This.messages.push({role: "user"; content: "Generate a list of exactly 5 values for \"birthdate\" of type date."})
This.messages.push({role: "assistant"; content: "¶¶1980-10-05¶2035-05-02¶1995-12-15¶2022-10-14¶2011-05-23¶¶"})
	
This.userPrompt:="Generate a list of exactly "+String($quantity)+" values for \""+String($attributeName)+"\" of type "+This.attributeType+"."
This.userPrompt+=($remark#"") ? (" Remark: "+$remark) : ""
If (This.attributeType="date")
  This.userPrompt+=" Date format: YYYY-MM-DD."
End if
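
For illustration, assuming the user asked for 20 text values for a "city" attribute with a remark (these parameter values are hypothetical, not taken from the demo), the assembled prompt would read:

// $quantity=20; $attributeName="city"; This.attributeType="Text"; $remark="Prefer European cities"
// This.userPrompt then contains:
// Generate a list of exactly 20 values for "city" of type Text. Remark: Prefer European cities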

You can see this set of lines as three parts:

  1. System prompt: here you set the context, the big picture. You point the model in the direction you want it to go. In this case, I write some kind of requirements.
  2. Preliminary conversation: then you mimic a conversation between the user (you) and the assistant (the AI). Don’t forget that OpenAI’s model is “nothing more” than a generative model dedicated to producing responses coherent with their context. By writing the beginning of the discussion yourself, you increase your chances of getting a stable answer. Here I simulate three questions and answers to increase my chances of always getting a well-formatted response, in the desired quantity.
  3. User prompt: this is the real question. A replica of the three previously simulated questions, but this time the prompt is built from what the user set in the demo UI.

As described above, the system prompt, the preliminary conversation, and the user prompt are all pushed into the $data.messages collection and submitted as the body of the HTTP request to OpenAI.

The rest is just classic string split.
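
That splitting step might look like this (a minimal sketch with illustrative variable names, not the demo’s exact code):

var $response : Text
var $values : Collection

$response:=This.queryOpenAI()
// Split on the ¶ separator; the leading and trailing ¶¶ markers
// produce empty strings, which are discarded
$values:=Split string($response; "¶"; sk ignore empty strings)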

Takeaways

By watching the demo and playing with it yourself, you’ll realize that querying OpenAI can take time. You can check https://status.openai.com/ if you suspect a downtime. One aspect greatly impacts response times: the number of tokens in the API response. The longer the response, the slower you’ll get it. But in many cases, just letting the machine run, even for a while, is faster than gathering, cleaning, and importing a relevant dataset.

There’s room for optimization in this demo, in many ways. The OpenAI API offers a streaming mode that could improve your experience by allowing record generation in the database to start much sooner than it does currently.
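
As a rough sketch of what enabling streaming could look like in queryOpenAI() (handleChunk is a hypothetical callback; parsing the server-sent-event chunks is left out):

$data.stream:=True  // ask OpenAI to stream the completion as server-sent events
// Handle each chunk as it arrives instead of waiting for the full response
$opts:={method: "POST"; headers: $headers; body: $data; onData: Formula(This.handleChunk($1; $2))}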

We hope this example will inspire you! Dive into the OpenAI API reference; it is full of interesting features and other use cases. Don’t hesitate to contribute and send us suggestions!

• 4D Product Team Leader • Mathieu joined 4D in 2020 as Product Team Leader. His team is composed of Product Owners, the voice of 4D users. Working hand in hand with the engineering team, their role involves prioritizing, scoping, and verifying that new features match 4D users’ expectations. Mathieu previously acted as projects director and team manager in the IT divisions of various leading industries (automotive, safety, advertising), specializing in international contexts and cloud-oriented services.