Meta's announcement that it will release its latest chatbot on the web, letting the general public talk to the system, is a surprising move. The technology giant sees the release as a way to collect authentic feedback on the bot's actual capabilities.
Meta's AI research labs have built this latest chatbot, which it calls BlenderBot 3. Web access will be limited to US residents for the time being. The bot is designed both to chat generally and to answer the kind of specific, short queries typical of digital assistants.
The bot is currently a prototype, built on Meta's previous work with large language models (LLMs), a class of highly powerful yet flawed text-generation software. The best-known example is OpenAI's GPT-3.
Looking at the Technical Bits Around BlenderBot
BlenderBot was initially trained on vast text datasets, mining them for the statistical patterns it uses to generate language. Such systems have proved extremely flexible across a range of uses, from generating code for programmers to assisting writers.
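To make "mining statistical patterns to generate language" concrete, here is a toy sketch of the idea, a bigram model that counts which word follows which in a small corpus and then generates text greedily. This is purely illustrative; modern LLMs like BlenderBot apply the same principle at vastly larger scale with neural networks, not word counts.

```python
# Toy illustration only (not Meta's training code): a bigram model that
# learns statistical patterns from text and uses them to generate language.
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for every word, which words follow it across the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_words=5):
    """Greedily pick the most frequent continuation at each step."""
    out = [start]
    for _ in range(max_words - 1):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = [
    "the bot answers questions",
    "the bot searches the web",
    "the bot answers short queries",
]
model = train_bigrams(corpus)
print(generate(model, "the"))  # -> "the bot answers questions"
```

The same weakness the article describes is visible even here: the model can only echo patterns in its training data, so whatever biases or errors the corpus contains come straight back out.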
But such models carry a number of well-known flaws too: they often regurgitate biases in their training data and invent answers to some questions. That becomes a serious problem when a user depends on the bot as a digital assistant.
This is why Meta wants to run the feedback and operations test. One particularly useful feature of the bot is its built-in capacity to search the web for specific information.
BlenderBot can even cite its sources. Kurt Shuster, a research engineer at Meta, states, “We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI…”
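The "search, then cite" behaviour can be sketched in a few lines. The snippet below is a hypothetical stand-in, not BlenderBot's actual pipeline: it looks answers up in a tiny hard-coded index (where a real system would query the web) and attaches the source to its reply. All names and entries here are invented for illustration.

```python
# Hypothetical sketch of answering with a citation. A real system would
# retrieve live web results; this toy uses a hard-coded snippet index.
SNIPPETS = {
    "blenderbot": ("BlenderBot 3 is a prototype chatbot from Meta.",
                   "ai.facebook.com"),
    "gpt-3": ("GPT-3 is a large language model from OpenAI.",
              "openai.com"),
}

def answer_with_citation(query):
    """Return the first snippet matching the query, tagged with its source."""
    q = query.lower()
    for topic, (text, source) in SNIPPETS.items():
        if topic in q:
            return f"{text} [source: {source}]"
    return "I couldn't find a source for that."

print(answer_with_citation("What is BlenderBot?"))
```

Attaching the source alongside the answer is what lets users (and Meta's researchers) check whether the bot's claim is grounded or invented.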
Is the Move Foolproof?
Well, for starters, prototype chatbot releases have a poor track record. Back in 2016, Microsoft released a chatbot named Tay to learn from public interactions on Twitter. Twitter users managed to coach Tay into regurgitating hurtful statements, and the bot had to be pulled in less than 24 hours.
However, Meta believes the AI field has changed considerably since Tay. Mary Williamson, research engineering manager at Facebook AI Research, says, “Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it’s capable of remembering what users say within a conversation but this data will only be used to improve the system further down the line.”