Over the past couple of years, especially since the launch of ChatGPT, AI chatbots have grown tremendously as tools that help users access information quickly. However, a persistent problem with such AI models is their tendency to hallucinate, producing false, fabricated, or otherwise incorrect content.
To address this issue, Microsoft has announced ‘Correction,’ a tool that aims to automatically fix factually incorrect AI-generated text, in a new blog post. It is available as part of Microsoft’s Azure AI Content Safety API and can be used with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4o. For now, the tool is only available in preview.
The feature builds on the existing ‘Groundedness Detection’ capability, which, according to Microsoft, ‘identifies ungrounded or hallucinated content’. The tool then fact-checks the AI-generated text by comparing it against a trusted grounding source, such as an uploaded document or transcript, and rewrites the parts that do not match.
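To make the workflow concrete, here is a minimal sketch of how a groundedness-detection request with correction enabled might be assembled. The endpoint path, API version, and field names below are assumptions based on the public preview documentation and may differ from the final API; the example only builds the request payload rather than calling the service.

```python
import json

# Assumption: API version string for the groundedness-detection preview.
API_VERSION = "2024-02-15-preview"


def build_groundedness_request(endpoint: str,
                               generated_text: str,
                               grounding_sources: list[str]) -> tuple[str, dict]:
    """Build the URL and JSON body for a hypothetical
    groundedness-detection call with correction enabled.

    `endpoint` is your Azure AI Content Safety resource URL;
    the path and body fields here are assumptions, not a
    definitive contract.
    """
    url = (f"{endpoint}/contentsafety/text:detectGroundedness"
           f"?api-version={API_VERSION}")
    body = {
        "domain": "Generic",
        "task": "Summarization",
        # The AI-generated text to fact-check.
        "text": generated_text,
        # Trusted documents or transcripts to compare against.
        "groundingSources": grounding_sources,
        # Assumption: flag asking the service to return a
        # corrected rewrite of ungrounded spans.
        "correction": True,
    }
    return url, body


url, body = build_groundedness_request(
    "https://example.cognitiveservices.azure.com",
    "The company was founded in 1999.",
    ["The company was founded in 2001 in Redmond."],
)
print(json.dumps(body, indent=2))
```

In this sketch the grounding source plays the role of the “correct source” described above: the service would compare the generated sentence against it and, with correction requested, return a rewritten sentence consistent with the source.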
Alongside the correction feature, Microsoft has also introduced a series of updates to increase the online security, safety, and privacy of AI systems. The company has expanded its Secure Future Initiative (SFI), with three core principles: secure by design, secure by default, and secure operations.
Microsoft has further unveiled a new capability for the Azure OpenAI Service Whisper model that ensures sensitive customer data remains secure, particularly during the inference process. This makes it especially valuable in sectors such as financial services, where data privacy is of utmost importance.