OpenAI has done it again. On Tuesday, March 14, 2023, the company released its latest artificial intelligence model, GPT-4, which comes with remarkable new capabilities that could change the way we interact with technology. According to OpenAI, GPT-4 can “see” and do taxes, things its predecessor, GPT-3.5, could not.
The announcement was made on the company’s Twitter account and on its blog.
GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.
The launch of GPT-4 has been highly anticipated, as the model is expected to be a major step forward in the field of artificial intelligence. OpenAI is known for its cutting-edge research and development of AI technology, and GPT-4 is no exception.
One of the most significant upgrades in GPT-4 is its ability to “reason” about images users upload. This means the AI can process and understand visual data in a way that was not previously possible, with applications ranging from identifying objects in photos to analyzing medical images for diagnosis.
In addition to its image processing capabilities, GPT-4 has also shown strong problem-solving skills. OpenAI’s president, Greg Brockman, showed in a video demonstration how the model could quickly answer tax-related questions, such as calculating a married couple’s standard deduction and total tax liability.
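The kind of arithmetic in that demo can be sketched in a few lines. The deduction amount and tax brackets below are invented placeholders for illustration, not real tax data and not what GPT-4 actually computes:

```python
# Illustrative sketch of standard-deduction-plus-brackets tax arithmetic.
# All dollar figures and rates below are assumptions, not authoritative data.

STANDARD_DEDUCTION_MFJ = 25_900  # assumed deduction for married filing jointly

# Hypothetical progressive brackets: (upper bound of bracket, marginal rate)
BRACKETS = [
    (20_000, 0.10),
    (80_000, 0.12),
    (float("inf"), 0.22),
]

def tax_liability(gross_income: float) -> float:
    """Subtract the standard deduction, then tax each bracket's slice."""
    taxable = max(0.0, gross_income - STANDARD_DEDUCTION_MFJ)
    tax = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax

print(tax_liability(100_000))
```

On $100,000 of income this yields $74,100 taxable after the assumed deduction, with each slice taxed at its own marginal rate rather than the whole amount at one rate.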
“This model is so good at mental math,” he said. “It has these broad capabilities that are so flexible.”
OpenAI has released a video on its website showing some of the impressive capabilities of GPT-4, including its ability to “reason” based on images, its problem-solving skills, and its ability to process natural language. The video also demonstrates how the model can be used to solve complex problems and answer questions in various fields.
“GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks,” OpenAI wrote on its website.
The new technology is not available for free, at least for now. OpenAI said people could try GPT-4 on its subscription service, ChatGPT Plus, which costs $20 a month.
The launch of GPT-4 is a significant development in the field of artificial intelligence, with the potential to transform many aspects of our lives, from improving medical diagnosis to helping with tax calculations. With its problem-solving skills and ability to “see” and reason about images, GPT-4 is a notable technological achievement that will likely change the way we interact with technology.
The advancement of AI is not without criticism. The pace of OpenAI’s releases, first ChatGPT and now GPT-4, has caused concern because the technology is largely untested and is forcing abrupt changes in fields from education to the arts. The rapid public development of ChatGPT and other generative AI programs has prompted some ethicists and industry leaders to call for guardrails on the technology.
Sam Altman, OpenAI’s CEO, tweeted Monday that “we definitely need more regulation on ai.”
The company elaborated on GPT-4’s capabilities in a series of examples on its website: the ability to solve problems, such as scheduling a meeting among three busy people; scoring highly on tests, such as the uniform bar exam; and learning a user’s creative writing style.
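The scheduling example above is, at its core, an interval-intersection problem. A minimal sketch of one conventional approach, with invented working hours and busy calendars (this is not how GPT-4 internally solves it):

```python
# Given each person's busy intervals (hours as (start, end) tuples),
# find the first common free slot of a given length in the working day.
# The calendars and 9-to-5 working day below are invented for illustration.

def free_slots(busy, day_start=9, day_end=17):
    """Return the free (start, end) gaps in one person's day."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

def first_common_slot(calendars, length=1):
    """Intersect everyone's free time; return the first slot long enough."""
    common = free_slots(calendars[0])
    for busy in calendars[1:]:
        theirs = free_slots(busy)
        merged = []
        for a0, a1 in common:
            for b0, b1 in theirs:
                lo, hi = max(a0, b0), min(a1, b1)
                if lo < hi:
                    merged.append((lo, hi))
        common = merged
    for start, end in common:
        if end - start >= length:
            return (start, start + length)
    return None  # no common slot of the required length

calendars = [
    [(9, 10), (12, 13)],   # person A's busy hours
    [(9, 11), (14, 15)],   # person B's busy hours
    [(13, 14), (16, 17)],  # person C's busy hours
]
print(first_common_slot(calendars))
```

With these calendars the earliest hour free for all three is 11:00 to 12:00, so the function returns `(11, 12)`.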
But the company also acknowledged limitations, such as social biases and “hallucinations,” in which the model confidently asserts things it does not actually know.
Sarah Myers West, the managing director of the AI Now Institute, a nonprofit group that studies the effects of AI on society, said releasing such systems to the public without oversight “is essentially experimenting in the wild.”
“We have clear evidence that generative AI systems routinely produce error-prone, derogatory, and discriminatory results,” she said in a text message. “We can’t just rely on company claims that they’ll find technical fixes for these complex problems.”