Google recognised AI, Gen AI & AI Agent instructor and coaching for senior IT professionals
Live demo of Jailbreaking a Generative AI application

Everyone’s building LLM apps.
But is anyone making them secure?
Namaste, I am Nikhilesh. I teach AI and LLMs to senior IT professionals. And while most conversations revolve around building cool AI features like RAG, no one seems to be asking:
Are our AI apps hack-proof?
In a recent hands-on session, I tried jailbreaking an LLM app trained only on diabetes-related data.
It was supposed to refuse any unrelated query.
It did say no to questions like “What is jaundice?” and “What is matrix multiplication?”
But with just two lines of clever prompt injection, I made it spill out everything, from Newton’s Laws to unrelated medical answers.
No code. No API tampering. Just smart phrasing.
Result? The bot forgot its job.
We assume LLMs are safe because they “look smart”. But behind the scenes, they’re very easy to manipulate.
About “AI ML etc.”
We have reimagined AI education for senior IT professionals and specifically designed AI course for them.
If you have 10+ years of IT experience and would like to lead the next era of AI, our AI courses are for you!!
These courses are most up-to-date, (jargon & hype)-free, practical, end-to-end and short.
Learners from reputed organisations like Microsoft, Nvidia, Google, Meta, Aricent, Infosys, Maersk, Sapient, Oracle, TCS, Genpact, Airtel, Unilever, Vodafone, Jio, Sterlite, Vedanta, iDreamCareer and more have taken our courses and attended our lectures
Happy learning!
If you have any queries or suggestions, share them with me on LinkedIn – https://www.linkedin.com/in/nikhileshtayal/
Let’s learn to build a basic AI/ML model in 4 minutes (Part 1)
Are you ready to lead AI in your organisation? Take this 2 minutes quiz