What security headaches has AI introduced in your projects lately? (2026 edition) #193727
Replies: 3 comments
everyone — the "vibe coding" vs security reality check is so real.
That .env file story is painful and way too common. Treating AI like a junior dev who doesn't understand context is the right mental model. Most teams default to Snyk (snyk.io) for catching this stuff: solid dependency scanning, huge database, good GitHub integration. But Snyk alone misses the pattern-based issues AI loves to generate: the SQL injection, the bad auth logic, the hardcoded secrets in old commits. And at scale, the noise and pricing become real problems. What made a difference for me was pairing a dependency scanner with
🏷️ Discussion Type
Bug
💬 Feature/Topic Area
Code quality
Discussion Details
Yo folks,
Been grinding with AI coding tools heavily this year (Cursor, Claude, Copilot, whatever) and man... the speed is insane, but the security side is giving me serious anxiety 😩
Security was already a top pain for most teams, but AI has thrown a whole new bag of problems on top:
AI spits out code with OWASP Top 10 vulnerabilities like it's nothing (heard some reports saying ~45% of generated code has serious flaws, especially in Java). "Vibe coding" sounds cool until you realize half your codebase is now insecure as hell.
Secrets leaking left and right — AI suggesting code that hardcodes stuff or exposes creds in logs/workflows.
Data privacy nightmare: When you paste chunks of your code or sensitive data into these tools, where does it actually go? Training data? Compliance audits? In regulated industries this is becoming a total headache.
GitHub Actions side is still messy too — dependency tags getting hijacked, secrets inheritance being too loose, reusable workflows blasting creds everywhere. GitHub dropped some 2026 roadmap stuff about scoped secrets and dependency locking, but until that's fully here, we're all playing with fire.
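Until scoped secrets and dependency locking actually ship, the usual mitigations for the tag-hijacking and loose-permissions problems are pinning actions to a full commit SHA and starting from zero token permissions. A minimal sketch — the SHA and script path below are placeholders, not real values:

```yaml
# Illustrative workflow hardening; <full-commit-sha> and the test script
# are placeholders you'd fill in for your own repo.
name: ci
on: [pull_request]

permissions: {}          # start from zero; grant per job, not repo-wide

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read     # least privilege for a checkout-and-test job
    steps:
      # Pin to a full commit SHA so a moved or hijacked tag can't swap
      # in different code under you (e.g. resolve the v4 tag to its SHA).
      - uses: actions/checkout@<full-commit-sha>
      - run: ./scripts/test.sh   # hypothetical test entry point
```

SHA pinning trades convenience for supply-chain safety: Dependabot can still bump the pinned SHA for you, but a tag repointed by an attacker can't.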
Personally, I've caught AI-generated code introducing SQL injection risks and bad auth patterns that I almost merged. Also had to start being super strict about what I feed into the AI because of privacy concerns (GDPR, client data, etc.).
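For anyone who hasn't hit it yet, the SQL injection pattern I keep seeing from these tools is string interpolation straight into the query. A minimal sketch (toy in-memory sqlite3 table, names made up) of the bad pattern next to the parameterized fix:

```python
import sqlite3

# In-memory DB just for the demo; table and data are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def find_user_unsafe(name: str):
    # The pattern AI assistants love to emit: user input interpolated
    # directly into the SQL. Feeding it "' OR '1'='1" dumps every row.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats `name` as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks both rows
print(find_user_safe("' OR '1'='1"))    # matches nothing
```

The unsafe version is the one that almost slips through review, because it works fine on every normal input.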
So real talk — what's the biggest security mess AI has caused in your projects this year?
Random vulnerabilities sneaking into prod?
Compliance/audit panic because "the AI wrote it"?
Shadow AI tools devs are using without telling security?
Or GitHub Actions specific drama (secrets, supply chain stuff)?
Drop your war stories, what you're doing to fix it, or tools/workflows that actually help. Let's share some practical tips instead of just complaining 😂
Would love to hear from both devs and security folks here. @github