Indian Computer Emergency Response Team (CERT-In) has issued an advisory outlining best practices for using generative AI tools like ChatGPT, Gemini, and Grok safely. The advisory highlights the risks associated with AI applications, including data poisoning, adversarial attacks, and model stealing, while also providing guidelines to ensure responsible use of AI tools.
Risks involved while using AI tools
CERT-In warns that generative AI models are susceptible to various security threats. These include:
Data poisoning: Malicious actors can manipulate training data to mislead AI models, causing them to generate biased or incorrect outputs.
Adversarial attacks: Attackers can subtly modify inputs to trick AI into producing false responses.
Model inversion: Hackers may extract sensitive details from AI training data.
Model stealing: Threat actors can replicate AI models by continuously querying them.
Prompt injection: Malicious inputs can bypass content filters and exploit AI responses.
Hallucination exploitation: Attackers can misuse AI-generated misinformation to spread false narratives or scams.
Backdoor attacks: Hidden triggers within AI models can lead to unexpected, potentially harmful behaviors.
Best practices to follow while using ChatGPT, Gemini and other AI tools
| Best Practice | Description |
| Choose AI applications carefully | Avoid downloading AI apps from unverified sources to prevent malware infections. Use only trusted and organization-approved AI tools. |
| Avoid sharing sensitive information | AI platforms may collect user data to improve their models, posing privacy risks. Do not input confidential data, such as financial or personal details, into AI tools. |
| Configure AI access rights properly | Review access permissions for AI tools linked to business applications. Periodically check and update settings to prevent unauthorized data exposure. |
| Do not rely solely on AI for accuracy | AI models can produce inaccurate or biased results due to outdated or incomplete data. Cross-check AI-generated information with reliable sources before using it. |
| Use AI tools for their intended purposes | AI should assist with content creation and research but not make critical business, medical, or legal decisions. |
| Secure AI accounts and logins | Use strong passwords and enable two-factor authentication. Log out of AI services after use, especially on shared devices. |
| Maintain anonymity when possible | Consider using an anonymous account to protect personal identity and privacy. Remove sensitive details before submitting queries. |
| Avoid plagiarized content | AI-generated content may inadvertently reproduce copyrighted material. Verify originality before publishing AI-assisted outputs. |
| Stay alert for suspicious activity | Be cautious of AI-generated scams, deepfakes, and phishing attempts. Monitor AI interactions for potential misuse or security risks. |
Discover the latest Business News, Sensex, and Nifty updates. Obtain Personal Finance insights, tax queries, and expert opinions on Moneycontrol or download the Moneycontrol App to stay updated!
