Many companies have long been working on their own large language models, but GPT-4 is still at the top. Other LLMs like Bard or Llama are good, but not as good as GPT-4. However, Apple believes its model, ReALM, is better than GPT-4 at reference resolution.
Apple ReALM LLM
Reference resolution is a language problem that involves determining what a particular expression refers to. For example, when we speak, we use pronouns like “they” and “that,” and a chatbot such as ChatGPT may not always understand what we mean by them.
Chatbots would benefit greatly from grasping precisely what is meant. A user can refer to something on the screen with “that,” “it,” or another such word, and a chatbot that resolves the reference will understand the context.
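The idea above can be sketched in a few lines of code. This is a minimal, illustrative example, not Apple’s actual system: it assumes on-screen entities can be rendered as a numbered text list, which a language model could then be asked to pick from when the user says “that” or “it.” All function and entity names here are hypothetical.

```python
# Illustrative sketch of on-screen reference resolution as text.
# On-screen entities are flattened into a numbered list so a language
# model (not included here) could be prompted to resolve "that"/"it"
# to a concrete entity. Names and structure are assumptions.

def build_prompt(utterance, entities):
    """Render on-screen entities as text and ask which one is referenced."""
    lines = [f"{i}. {kind}: {text}" for i, (kind, text) in enumerate(entities)]
    return (
        "On-screen entities:\n"
        + "\n".join(lines)
        + f"\nUser says: {utterance!r}\n"
        + "Which entity number does the user mean?"
    )

# Hypothetical screen contents: a phone number and an address.
entities = [
    ("phone_number", "415-555-0100"),
    ("address", "1 Infinite Loop, Cupertino"),
]
prompt = build_prompt("Call that number", entities)
print(prompt)
```

Given the utterance “Call that number,” a model reading this prompt has enough context to resolve “that number” to entity 0, the phone number.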
This is Apple’s third AI-related publication in recent months. These papers could be viewed as an early preview of capabilities the company intends to add to iOS and macOS.
Apple says ReALM shows significant improvements over an existing system with comparable capability across several types of references. “Our smallest model achieves absolute gains of more than 5% for on-screen references. We also benchmark against GPT-3.5 and GPT-4, with our smallest model performing comparably to GPT-4 and our larger models substantially outperforming it,” the researchers said in their paper.