OpenAI has announced GPT-4, a new language model that succeeds the one behind last year's hit chatbot. At first, only ChatGPT Plus subscribers and developers will be able to use GPT-4, which is being billed as a big step up from the GPT-3.5 model that powers ChatGPT today.
Today’s livestream focused on GPT-4’s rumored multimodal capabilities, which would let the chatbot handle text, images, and maybe even video inputs. In the event, however, only text and images were demonstrated as inputs.
During the livestream, we saw GPT-4 running as a Discord bot that could turn a hand-drawn sketch into a working website, a big step beyond plain text prompts. It is not yet known whether GPT-4 will eventually be able to produce outputs in formats other than text as well.
Even though this livestream was mainly about the new GPT-4 API and how developers can use it, the demos were still very impressive. We saw the GPT-4 model stand in for tax software, process image inputs, and build a working website from inside a Discord bot. Here are our thoughts on the OpenAI GPT-4 Developer Livestream, along with some other AI news.
Processes 8x the words of ChatGPT: According to OpenAI, the GPT-4 model can handle up to 25,000 words, compared with roughly 3,000 words for the free version of ChatGPT. The chatbot can now work with longer text and respond in a more natural manner, which lets it summarize entire web pages or blog posts.
Availability rolling out soon: OpenAI’s product announcement says GPT-4 will be available to ChatGPT Plus subscribers and to developers through the API. It is not yet clear if or when the free version of ChatGPT will be upgraded to GPT-4.
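For developers, access to GPT-4 through the API should look much like the existing ChatGPT API, with the model name swapped out. As a rough sketch, the snippet below builds a chat-completion request body using only Python's standard library; the endpoint URL and the `gpt-4` model name come from OpenAI's announcement, but the exact payload shape here is an assumption modeled on the current ChatGPT API.

```python
import json

# Endpoint from OpenAI's existing ChatGPT API; request shape assumed
# to carry over to GPT-4 once access rolls out.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4") -> str:
    """Serialize a minimal chat-completion request body as JSON."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

body = build_request("Summarize this blog post in three sentences.")
print(body)
```

Sending `body` as a POST to the endpoint (with an API key in the `Authorization` header) is all an integration would need once a developer's access is enabled.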
Handles text and images: Unlike the current version of ChatGPT, GPT-4 can process both text and image inputs. Microsoft hinted at an upcoming video input capability at a recent AI event, but OpenAI has yet to demonstrate any such functionality.
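If image inputs do reach the API, a single user message would presumably need to carry both text and an image reference. The sketch below shows one plausible way to structure such a message; OpenAI has not published the image-input request format, so this layout is purely an illustrative guess, not the documented API.

```python
import json

def build_multimodal_message(question: str, image_url: str) -> dict:
    """Build a hypothetical user message mixing text and an image.

    The list-of-parts structure is an assumption; OpenAI has not yet
    documented how image inputs will be passed to GPT-4.
    """
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_multimodal_message(
    "Turn this hand-drawn sketch into a website.",
    "https://example.com/sketch.png",  # placeholder image URL
)
print(json.dumps(msg, indent=2))
```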
We’re covering the OpenAI GPT-4 Developer Livestream here, so please join us. OpenAI has already announced GPT-4 as a new product on its website, and it is now giving developers a live look at the model.
The initial promises are very appealing. OpenAI says GPT-4 can read and understand up to 25,000 words of text, far more than the roughly 3,000 words ChatGPT can handle. But GPT-4’s multimodal features are the real improvement, because they let the chatbot process images as well as text. Microsoft’s press event this week has fueled rumors that video processing will follow soon.