With ChatGPT and other generative AI making waves, we’ve been testing their limits. ChatGPT seemed unstoppable, but in late November, users observed a peculiar shift in GPT-4, with the AI appearing “lazy” and even suggesting users handle tasks on their own. OpenAI acknowledged the issue but couldn’t pinpoint the cause. Now, the company has rolled out an update with a potential fix for the problem.
The latest GPT-4 Turbo preview model, derived from the more widely available GPT-4 and trained on information as recent as April 2023, tackles the previous bug (via The Verge). The fix is aimed at curbing instances of “laziness,” where the model wasn’t pulling its weight in completing tasks, as explained in a company blog post. Since the GPT-4 Turbo launch, many ChatGPT users have noticed the AI chatbot getting a bit slack, particularly in handling coding tasks, compared to earlier GPT-4 versions.

ChatGPT, the go-to tool for those who prefer delegating tasks at work, has gained 100 million weekly active users as of November 2023 (via TechCrunch). Research indicates that ChatGPT has played a role in enhancing efficiency for users, enabling them to deliver higher-quality work.
However, ChatGPT users found themselves in a bit of a pickle when the AI started playing the boss card, prompting OpenAI to step in and investigate. Over on Reddit, users were airing their grievances about struggling to coax ChatGPT into giving the right responses by trying out different prompts. For many users, getting the AI to play nice with coding tasks was the real headache, causing a fair share of complaints.
OpenAI hasn’t provided a statement on why this behavior shift happened, but its employees have admitted on social media that the problem is legit.
In other news from OpenAI’s blog, a fresh version of GPT-3.5 Turbo (gpt-3.5-turbo-0125) is out now. The company says it includes “various improvements,” including better accuracy when responding in requested formats and a nifty fix for a bug causing text encoding headaches in non-English language function calls.
On top of that, OpenAI is slashing prices for both the input and output of the model. The input cost is dropping by 50% to $0.0005 per thousand input tokens, while the output cost is getting a 25% cut, now standing at $0.0015 per thousand output tokens.
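To see what those new rates mean in practice, here is a minimal cost-estimate sketch using the per-thousand-token prices quoted above; the token counts are illustrative, not from OpenAI.

```python
# New gpt-3.5-turbo-0125 rates quoted in the article, in dollars per 1,000 tokens.
INPUT_PRICE = 0.0005   # $ per 1K input tokens (after the 50% cut)
OUTPUT_PRICE = 0.0015  # $ per 1K output tokens (after the 25% cut)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens / 1000) * INPUT_PRICE + (output_tokens / 1000) * OUTPUT_PRICE

# Example: a request with 2,000 input tokens and 1,000 output tokens.
print(f"${estimate_cost(2000, 1000):.4f}")  # prints "$0.0025"
```

At these prices, a million input tokens costs $0.50 and a million output tokens $1.50, which is why the cut matters most for high-volume API workloads.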
Image: an MTEB benchmark demonstrating that a text-embedding-3-large embedding shortened to 256 dimensions still outperforms a full-size, 1,536-dimension text-embedding-ada-002 embedding.
Finally, OpenAI has unveiled new embedding models: text-embedding-3-small and text-embedding-3-large. These models transform your content into numerical sequences, making life easier for machine learning tasks such as clustering and retrieval.
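To illustrate the retrieval use case those numerical sequences enable, here is a minimal sketch that ranks documents by cosine similarity to a query. The four-dimensional vectors are hypothetical stand-ins; real text-embedding-3 vectors returned by the API have hundreds to thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a query and two documents.
query = [0.1, 0.8, 0.3, 0.0]
docs = {
    "doc_a": [0.1, 0.7, 0.4, 0.1],
    "doc_b": [0.9, 0.0, 0.1, 0.2],
}

# Retrieval: pick the document whose embedding points in the most
# similar direction to the query's.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # prints "doc_a"
```

Clustering works the same way: distances between embedding vectors stand in for semantic similarity between the original texts, so standard numerical algorithms can group or rank content without understanding language directly.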