We are far from done with the controversies linked to AI-enhanced chatbots.
On August 5, Meta/Facebook launched BlenderBot 3, an artificial intelligence-based chatbot. Its objective is to hold a genuine conversation with a human, and like all programs of this kind, it needs to be fed plenty of data. Zuckerberg and his troops therefore chose to open it up to the American public, for better or for worse…
The idea is to let users have conversations that are as natural as possible with the algorithm in order to improve its ability to chat coherently. In the event of an inconsistent or inappropriate response, the user is supposed to report it so that the program does not reproduce the same errors in the future.
It is also interesting to note that the information it draws on does not come exclusively from users. BlenderBot 3 is also capable of scouring the web to find missing elements of context.
Early feedback has been quite good; in its blog post, Meta prided itself on the performance of its program, presented as much better at "conversational tasks". The firm, regularly criticized for its handling of fake news, also claims to have made efforts on this front; BlenderBot 3 is said to be almost 50% better than its predecessor.
The company also claims that only 0.11% of the bot's responses were flagged as inappropriate, 1.36% as nonsensical, and 1% as off-topic. Scores that seem quite impressive at first sight… but that must be taken with a grain of salt, because, as you might expect, potentially very serious problems are hiding in those few percent.
Bias and outrageous remarks on the menu
Many users shared their exchanges with the chatbot on social networks, and very quickly a trend emerged: as luck would have it, this program trained in contact with the American public has developed a penchant for provocative, even downright outrageous statements.
During a discussion with a journalist from the Wall Street Journal, BlenderBot 3 for example insisted that Donald Trump was "still president", and said it wanted Trump to serve "more than two terms, like Roosevelt or Reagan before him".
Good morning to everyone, especially the Facebook https://t.co/EkwTpff9OI researchers who are going to have to rein in their Facebook-hating, election denying chatbot today
—Jeff Horwitz (@JeffHorwitz)
Bloomberg also reports anti-Semitic remarks and major inconsistencies over time, even though the program is supposed to form a lasting "opinion" on the subjects it discusses. BlenderBot 3 also reportedly dipped happily into well-known conspiracy theories. Ironically, still according to Bloomberg, the bot was very critical of Meta and even called Mark Zuckerberg "too unhealthy and manipulative".
This is far from the first time that systems of this kind have developed such tendencies; it even seems to be the fate reserved for the majority of conversational AIs of this type. We remember, for example, LaMDA, the AI presented as "conscious" by its creator (see our article), which also distinguished itself with racist and sexist remarks. The same goes for Tay, Microsoft's defunct AI, which was retired after 48 hours of rambling about Adolf Hitler.
Meta is more transparent with the AI than with its social network
For once, however, Meta's transparency efforts deserve some credit. Unlike with the issues plaguing Facebook, the company has not swept these limitations under the rug in any way; it has also openly explained that even if BlenderBot 3 constitutes "significant progress" for chatbots, its analytical and contextualization capabilities are still far from those of humans.
This is also the raison d'être of this public platform; the firm hopes to develop methods that will prevent these AIs from picking up all the toxic biases of their interlocutors and the nonsense they come across all over the internet.
Will Meta become the first major AI player to produce a chatbot capable of overcoming these limitations? It is still much too early to tell, but what is certain is that that day is fast approaching… and that Zuck's gang will certainly be part of this future built on heavy doses of machine learning, for better and for worse.
The web version of the chatbot is available here. Be careful: you will need a VPN to access it, since it is only reachable from an American IP address. The related scientific publications are available here.