OpenAI's New AI Reinforcement Fine-Tuning Could Transform How Scientists Use Its Models
The second day of OpenAI's 12 Days of OpenAI event shifted to less spectacular, more enterprise-minded news than day one's general rollout of the OpenAI o1 model to ChatGPT.
Instead, OpenAI announced plans to release Reinforcement Fine-Tuning (RFT), a way for developers to customize its AI models for specific kinds of tasks, especially more complex ones. The release marks a clear shift toward enterprise applications after day one's consumer-focused updates. You can think of RFT as a method for improving how AI models reason their way to a response. A developer supplies a dataset and an evaluation rubric, and OpenAI's platform uses them to train a specialized model without lots of expensive reinforcement gathered from later experience.
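To make that idea concrete, here is a minimal, hypothetical sketch in Python of the two ingredients a developer would supply: a small domain-specific dataset and a grading rubric that scores the model's answers. The field names and the scoring scheme are illustrative assumptions, not OpenAI's published RFT format, which was only available through an alpha program at the time.

```python
# Minimal sketch of the two inputs RFT asks developers for: a dataset and a
# grading rubric. Field names and scoring are illustrative assumptions only;
# the official RFT format was limited to OpenAI's alpha program at the time.
import json

# A tiny domain-specific dataset: each example pairs a prompt with the
# reference answer the rubric will grade against.
examples = [
    {"prompt": "Which gene is most commonly associated with cystic fibrosis?",
     "reference_answer": "CFTR"},
    {"prompt": "What inheritance pattern does Huntington's disease follow?",
     "reference_answer": "autosomal dominant"},
]

# Save as JSONL, one example per line, a common format for fine-tuning data.
with open("rft_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

def grade(model_answer: str, reference_answer: str) -> float:
    """A toy evaluation rubric: full credit for an exact match, partial
    credit if the reference appears somewhere in a longer answer.
    In RFT, a grader like this supplies the reward signal that steers
    the model's reasoning during training."""
    answer = model_answer.strip().lower()
    reference = reference_answer.strip().lower()
    if answer == reference:
        return 1.0
    if reference in answer:
        return 0.5
    return 0.0

# Example: score a hypothetical model response against the first example.
print(grade("The CFTR gene.", examples[0]["reference_answer"]))  # prints 0.5
```

The point of the sketch is the division of labor the announcement describes: the developer contributes a modest amount of domain data and a scoring function, and OpenAI's platform handles the reinforcement training itself.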
RFT could be a boon for AI tools employed in law and science. In its live stream, OpenAI highlighted the CoCounsel AI assistant Thomson Reuters built with RFT, as well as how RFT helps researchers studying rare genetic diseases at Berkeley Lab. However, these business partnerships won't make much difference in the short term for average users of ChatGPT or other OpenAI products.
"today we are announcing reinforcement finetuning, which makes it really easy to create expert models in specific domains with very little training data. livestream going now: https://t.co/ABHFV8NiKc alpha program starting now, launching publicly in q1" – December 6, 2024
Enterprise or consumer
If you're more keen on the consumer side of things, don't give up just yet. While the enterprise tilt contrasts with day one, it's easy to imagine OpenAI wanting as broad a range of news as possible across the 12 days. There will almost certainly be plenty more consumer news to come, perhaps on alternating days or in some other pattern.
Still, at least the closing joke from OpenAI was a little funnier than yesterday's. The AI described how self-driving vehicles are popular in San Francisco and how Santa is keen to make a self-driving sleigh as part of the trend. The problem is that it keeps hitting trees. Why? He didn't pine-tune his models. Maybe the image ChatGPT made for TechRadar's Editor-at-Large Lance Ulanoff will sell the humor better.