OpenAI Working on Customisable Chatbot Upgrade to Address AI Biases
17 Feb. 2023
Artificial intelligence (AI) systems now power everyday services such as search engines and chatbots. However, there is growing concern that biases in these systems can produce harmful or misleading results. This week, Microsoft said it was refining its OpenAI-powered Bing search engine after media outlets highlighted potentially dangerous answers from the technology. To address such concerns, OpenAI recently announced a customisation upgrade for its popular ChatGPT chatbot, allowing users to tailor the bot's behaviour to their own views.
The new customisation feature will let users personalise the AI system in line with their views and opinions. OpenAI hopes this will reduce perceived bias in the technology and stop it from producing incorrect or offensive answers. Other companies in the generative AI space will also need to set guardrails so that this nascent technology works safely and effectively without causing harm or disruption. As part of that process, Microsoft said it was learning from user feedback, which was helping it improve Bing ahead of a wider rollout.
OpenAI has also taken steps to mitigate political bias while developing ChatGPT, but believes that giving users more freedom to customise their bots will better accommodate differing views and opinions. The San Francisco-based startup released ChatGPT last November. The chatbot uses generative AI, a type of artificial intelligence capable of producing human-like responses, and has sparked considerable interest among tech enthusiasts worldwide.
In conclusion, OpenAI's announcement signals a commitment from startups and larger corporations alike to building ethical solutions that account for potential bias in artificial intelligence systems, whether generative AI-powered chatbots or search engines like Microsoft's Bing. With this kind of proactive approach to preventing harm from algorithmic bias, these technologies can remain both useful and safe for all users in the future.