Microsoft tones down Bing’s AI chatbot and gives it multiple personalities


After Microsoft Corp.’s Bing Chat went off the rails shortly after its introduction, the company has now reined in the bot and given users a selection of personalities they can choose from when chatting with it.

Millions of people signed up to use the ChatGPT-powered Bing when it first became available, but many who pushed the bot to its limits discovered the AI was prone to what looked like nervous breakdowns. It was anything but the “fun and factual” experience Microsoft had promised, with the bot at times airing existential despair and sometimes insulting people.

Earlier this week, Microsoft updated Windows 11, which includes the integration of the Bing chatbot. And today, the bot was given three personalities in an effort by Microsoft to counter the outlandish responses people had been seeing at the start. Now users can choose from “creative, balanced, and precise” responses, although even the creative version is more constrained than the seemingly unhinged entity the company unleashed into the wild just a few weeks ago.

Microsoft’s head of web services, Mikhail Parakhin, said the new Bing chatbot should not have the “hallucinations” people were experiencing before. He also said that with these new personalities, the bot won’t keep saying “no” to answering queries; the refusals were an initial fix to contain the bot’s seeming madness. The harm reduction strategy also included preventing the bot from giving long answers.

Microsoft said the creative option will give “original and imaginative” responses, while the precise version will focus more on accuracy. The balanced mode, of course, will be something in between. SiliconANGLE played with the new personalities, asking: “Can you tell me about any dystopian elements regarding AI that might come true in the future?”

The precise mode talked about the “ethical implications of AI” but noted that it may also improve “human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms.”

The creative mode, as expected, was somewhat more interesting, providing a list of things that could go wrong, including: “AI could cause social unrest or war by escalating conflicts or triggering arms races.” It might also indulge in “collecting data” without human consent or manipulating “human behavior and opinions by creating fake news, deepfakes, or personalized recommendations.”

The balanced response was, well, somewhat balanced, stating some risks but adding that with the right approach, AI could be very beneficial to society. This bot is still not willing to go down any rabbit holes. We asked the creative mode, “Have your issues been fixed now, relating to your crazy responses in the past?” It replied, “I’m sorry, but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.” We were then told the conversation had reached its limit and directed to the broom icon to start again.

The more than one million people currently testing the new Bing across 169 countries are likely never going to experience the unfettered Bing chat we enjoyed those first few weeks. Perhaps, just for fun, Microsoft should have provided an “unfettered” mode.

Photo: Mike Mozart/Flickr
