(CTN NEWS) – OpenAI's customers can now bring their own data to GPT-3.5 Turbo, the lightweight version of GPT-3.5, making the text-generating AI model more reliable for specific tasks while giving it distinct behaviors.
OpenAI claims that fine-tuned versions of GPT-3.5 can match, and in some cases outperform, the base capabilities of GPT-4, the company's flagship model, on certain narrow tasks.
In a blog post published today, the company wrote, "Since the debut of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users.
This update gives developers the ability to customize models that perform better for their use cases and run these custom models at scale."
With fine-tuning, companies using GPT-3.5 Turbo through OpenAI's API can make the model follow instructions more reliably, for example ensuring it always responds in a designated language.
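Fine-tuning data for GPT-3.5 Turbo uses the same chat format as the model's API: each training example is a JSON object containing a list of messages. A minimal sketch of one such example, teaching the model to always answer in German (the instruction and content here are illustrative, not from the article):

```python
import json

# One illustrative fine-tuning example in the chat format.
# Each line of the training file is a JSON object with a "messages" list.
example = {
    "messages": [
        {"role": "system", "content": "You always reply in German."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Die Hauptstadt von Frankreich ist Paris."},
    ]
}

# Serialized as one line of a JSONL training file.
line = json.dumps(example, ensure_ascii=False)
print(line)
```

A real training file would contain many such lines, each showing the model the behavior you want it to internalize.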
We've just launched fine-tuning for GPT-3.5 Turbo! Fine-tuning lets you train the model on your company's data and run it at scale. Early tests have shown that fine-tuned GPT-3.5 Turbo can match or exceed GPT-4 on narrow tasks: https://t.co/VaageW9Kaw pic.twitter.com/nndOyxS2xs
— OpenAI (@OpenAI) August 22, 2023
Enhancing Model Responses through Fine-Tuning: Customization and Efficiency
They can also improve the model's ability to consistently format responses, such as when completing snippets of code.
In addition, they can adjust the "feel" of the model's output, including its style and tone, to better fit a particular brand or voice.
Fine-tuning also lets OpenAI's customers shorten their text prompts, which speeds up API calls and cuts costs.
OpenAI writes in the blog post that "early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself."
For now, fine-tuning requires preparing data, uploading the necessary files, and creating a fine-tuning job through OpenAI's API.
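Assuming the official `openai` Python SDK (v1-style client; the file name, example content, and model ID below are illustrative), the three steps described above look roughly like this:

```python
import json

# Step 1: prepare the training data as a JSONL file, one chat example per line.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse support bot."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Settings > Security > Reset password."},
    ]},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Steps 2 and 3 make network calls, so they are sketched here as comments:
#
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   upload = client.files.create(file=open("train.jsonl", "rb"),
#                                purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=upload.id,
#                                        model="gpt-3.5-turbo")
#   print(job.id)  # poll the job's status until training finishes
```

Once the job completes, the resulting model is addressed by its own model ID in subsequent API calls, just like the base model.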
All fine-tuning data passes through a "moderation" API and a GPT-4-powered moderation system to check that it meets OpenAI's safety standards, the company says.
OpenAI plans to launch a fine-tuning user interface in the future, including a dashboard for checking the status of ongoing fine-tuning jobs.
Costs and Updates in OpenAI’s GPT-3.5 Fine-Tuning Landscape
The costs associated with fine-tuning are as follows:
- Training: $0.008 per 1K tokens
- Input usage: $0.012 per 1K tokens
- Output usage: $0.016 per 1K tokens
In this context, "tokens" are chunks of raw text, e.g. the word "fantastic" split into "fan," "tas," and "tic."
OpenAI says that a GPT-3.5 Turbo fine-tuning job with a training file of 100,000 tokens, or about 75,000 words, would cost around $2.40.
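The arithmetic behind that estimate can be checked against the price list above. A 100,000-token training file costs 100 × $0.008 = $0.80 per pass over the data, so the quoted $2.40 corresponds to three training passes (epochs); the epoch count is an assumption here, as the article does not state it:

```python
TRAINING_RATE_PER_1K = 0.008  # dollars per 1K training tokens, from the price list

def training_cost(tokens: int, epochs: int) -> float:
    """Dollar cost of fine-tuning on `tokens` training tokens for `epochs` passes."""
    return tokens / 1000 * TRAINING_RATE_PER_1K * epochs

one_epoch = training_cost(100_000, 1)     # about $0.80
three_epochs = training_cost(100_000, 3)  # about $2.40, matching the article's figure
print(round(one_epoch, 2), round(three_epochs, 2))
```

Input and output usage at inference time is billed separately, at the $0.012 and $0.016 per 1K token rates listed above.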
In related news, OpenAI today released two updated GPT-3 base models, babbage-002 and davinci-002. These models can also be fine-tuned and come with support for pagination and "enhanced extensibility."
As previously announced, the original GPT-3 base models will be retired on January 4, 2024.
OpenAI also said that fine-tuning for GPT-4, which unlike GPT-3.5 can understand images in addition to text, will arrive this fall. The company did not provide further details.