GOOGLE IS NOT ABOUT TO LET MICROSOFT OR ANYONE ELSE ATTACK ITS SEARCH KINGDOM WITHOUT A FIGHT. The company announced today that a chatbot named Bard will be available “in the coming weeks.” The launch appears to be a response to ChatGPT, the wildly popular artificial intelligence chatbot created by startup OpenAI with Microsoft funding.
Google CEO Sundar Pichai wrote in a blog post that Bard is already available to “trusted testers” and is intended to put the “breadth of the world’s knowledge” behind a conversational interface. It employs a scaled-down version of LaMDA, a powerful AI model Google first announced in May 2021 that is built on technology similar to ChatGPT’s. Google says this approach will let it open the chatbot to more users and gather feedback to help address challenges around the quality and accuracy of its responses.
Both Google and OpenAI are building their bots on text generation software that, while eloquent, is prone to fabricating information and can mimic unsavoury styles of speech picked up online. The need to mitigate those flaws, along with the fact that this type of software cannot easily be updated with new information, poses a challenge to hopes of building powerful and profitable new products on top of the technology, including the suggestion that chatbots could reinvent web search.
Notably, Pichai did not reveal plans to incorporate Bard into the search box that drives Google’s profits. Instead, he demonstrated a novel and cautious application of the underlying AI technology to enhance conventional search: for questions with no single agreed-upon answer, Google will synthesise a response that reflects the differing points of view.
For example, the answer to the question “Is it easier to learn the piano or the guitar?” would be “Some say the piano is easier to learn because the finger and hand movements are more natural… Others claim that learning chords on the guitar is easier.” Pichai also stated that Google intends to make the underlying technology available to developers via an API, similar to what OpenAI is doing with ChatGPT, but provided no timeline.
The euphoria generated by ChatGPT has fueled speculation that Google is facing a serious challenge to its dominance in web search for the first time in years. Microsoft, which recently invested around $10 billion in OpenAI, will hold a media event tomorrow to discuss its collaboration with ChatGPT.
This is thought to be related to new features for the company’s second-place search engine, Bing. Shortly after Google’s announcement, OpenAI CEO Sam Altman tweeted a photo of himself with Microsoft CEO Satya Nadella.
ChatGPT, which was quietly launched by OpenAI last November, has quickly become an internet sensation. Many users envision a revolution in education, business, and daily life as a result of its ability to answer complex questions with apparent coherence and clarity. Some AI experts, however, advise caution, pointing out that the tool does not understand the information it serves up and is prone to making things up.
The situation may be particularly galling to some of Google’s AI experts, because the company’s own researchers developed some of the technology behind ChatGPT—a fact that Pichai alluded to in Google’s blog post. “Six years ago, we reoriented the company around AI,” Pichai wrote. “We’ve continued to invest in AI across the board since then.” He pointed to both Google’s AI research division and work at DeepMind, a UK-based AI startup acquired by Google in 2014.
ChatGPT is built on GPT, an AI model based on the transformer, an architecture invented at Google that takes a string of text and predicts what comes next. OpenAI has gained notoriety by publicly demonstrating how feeding massive amounts of data into transformer models and increasing the computing power used to run them can yield systems adept at generating language or imagery. ChatGPT improves on GPT by having humans rate different answers and feeding that feedback to another AI model, which then fine-tunes the output.
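To make that prediction step concrete, here is a minimal sketch of next-token prediction using the small, openly available GPT-2 model through the Hugging Face transformers library; the model choice and library here are illustrative assumptions, not the systems Google or OpenAI actually run.

# A minimal sketch of next-token prediction with a transformer, using the
# small, openly available GPT-2 model via the Hugging Face "transformers"
# library. This illustrates the general technique described above; it is
# not Google's or OpenAI's production code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Is it easier to learn the piano or the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every vocabulary token at each position

# The last position holds the model's guess for the next token; chatbots
# build whole replies by repeating this step, sampling one token at a time.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))

Chat-style systems layer the human-feedback fine-tuning described above on top of this basic loop, steering which continuations the model prefers.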
Google has, by its own admission, chosen to proceed cautiously when it comes to incorporating LaMDA technology into products. Aside from hallucinating incorrect information, AI models trained on web text are prone to exhibiting racial and gender biases and to repeating derogatory language.
Google researchers highlighted these limitations in a 2020 draft research paper arguing for caution with text generation technology, which irritated some executives and led to the firing of two prominent ethical AI researchers, Timnit Gebru and Margaret Mitchell.
Other Google researchers who worked on the technology behind LaMDA grew frustrated with the company’s hesitancy and left to found startups built on the same technology. The arrival of ChatGPT appears to have prompted Google to shorten its timeline for bringing text generation capabilities into its products.