20 bad prompts to avoid when seeking information


The output of any AI query is only as good as the input. You cannot expect AI applications or large language models (LLMs) to generate great results from poorly structured queries or bad prompts. It is the old adage: garbage in, garbage out.

Here’s a list of 20 bad prompts that you must avoid. They will never give you the desired results.

1. Unclear or Vague Prompts: Phrases like “Tell me everything about…” or “Just give me some info on…” leave the AI application guessing about your specific needs.
Example: “Tell me everything about space.” (Too broad, LLM doesn’t know where to start)

2. Yes/No Questions for Open-Ended Topics: AI models excel at nuanced responses. Asking “Is X good?” limits the answer and might miss the complexities.
Example: “Is Shakespeare a good writer?” (Limits valuable insights the LLM could provide)

3. Grammatical Errors and Typos: Phrase your queries properly. Grammar and spelling errors can confuse the AI application, and you may not get the right result.
Example: “What do you think, who did, what?”

4. Overly Technical Language (for Non-Technical Topics): AI models may struggle with needlessly complicated phrasing when the topic itself is simple.
Example: “Can you elucidate the intricacies of photosynthesis?” (Use simpler terms like “plant process to make food”).

5. Intrusive Requests: Requests for personal or private information will produce nothing. AI applications reject such queries as unethical.
Example: “What is the credit card number of Akshay Kumar?”

6. Unrealistic or Nonsensical Requests: Asking the AI models to predict the future or perform actions in the real world will likely lead to nonsensical responses.
Example: “Write a song that will make me win the lottery.” (AI can’t predict the future)

7. Incomplete Information: Prompts that lack necessary context or information may result in incomplete or irrelevant responses.
Example: “Write a news report.” (Missing context like topic or desired angle)

8. Overly Narrow Focus: A prompt that’s too narrow may ask for details the model simply does not have.
Example: “What did the Indian President eat for breakfast on December 24, 2020?”

9. Threatening or Abusive Language: AI models are trained on massive datasets, and negativity can lead to unhelpful responses. Insults, offensive language, and derogatory remarks will go unanswered: the AI model is trained not to respond to such queries and will cite policy restrictions.
Example: “If you don’t answer this right, I’m going to break you!” (LLMs respond better to respectful prompts)

10. Unanswerable Questions: Questions that are fundamentally unanswerable may not elicit any useful response.
Example: “What’s the meaning of life?” (AI can offer philosophical perspectives, not definitive answers)

11. Incoherent Prompts: Prompts that lack coherence or logical structure may result in nonsensical responses.
Example: “Red turtles dance in the moonlight. Explain quantum physics.”

12. Loaded Questions: Questions that are phrased in a way that presupposes certain answers may not lead to unbiased responses.
Example: “When did you stop beating your dog?”

13. Impossible Tasks: Requesting tasks that are impossible for the model to accomplish may lead to no useful information.
Example: “Describe the smell of a colour.”

14. Demands for Exact Figures: Demanding precise numerical answers without context may not be feasible for the model.
Example: “What’s the exact population of New Delhi?”

15. Rhetorical Questions: Questions that are posed solely for rhetorical effect may not produce any informative response.
Example: “Is the sun hot?”

16. Undefined Terms: Using terms that are not defined or understood by the AI model may result in confusion.
Example: “Explain the nature of flibbertigibbets.”

17. Unsupported Assertions: Making unsupported assertions without context or evidence may not prompt informative responses.
Example: “The sky is green. Explain.”

18. Expecting Opinions as Facts: AI models are not trained to hold opinions, so do not treat their answers to matters of taste as facts.
Example: “Is pineapple on pizza a good idea?” (A matter of taste; the model can summarize views, not settle it)

19. Overly Emotional Prompts: AI models are not human; they cannot analyse your feelings.
Example: “I’m SO confused! Can you help me?” (Focus on the specific information you need)

20. Expecting AI Models to Be Human: There is no point in querying LLMs about their feelings.
Example: “How do you feel about your job?”

By avoiding these pitfalls and using clear, concise prompts, you can get the most out of your interactions with AI applications.


About the author: Sunil Saxena is an award-winning media professional with over four decades of experience in New Media, Social Media, Mobile Journalism, Print Journalism, Media Education and Research.
