Using code2prompt to Generate Better AI Code: A Developer’s Guide
As a developer, I’ve always been somewhat skeptical about using AI for code generation. I prefer writing code myself and typically use AI only as an assistant for documentation, boilerplate code, or clearing doubts. However, we all face those moments when project deadlines loom, and we need to accelerate our development process.
The Problem with AI-Generated Code
Let’s be honest: AI-generated code is often subpar. The main reason? AI lacks context about your codebase. Without understanding your project’s structure, dependencies, and patterns, AI models generate what they think is correct, which often misses the mark.
This is where code2prompt comes in: a brilliant tool developed by mufeedvh that transforms your entire codebase into a comprehensive prompt for Large Language Models (LLMs).
Setting Up code2prompt
Prerequisites
Before we begin, you’ll need:
- Rust programming language
- Cargo (Rust’s package manager)
Make sure you have the latest stable version of Rust:
rustup update stable
Installation
Installing code2prompt is straightforward with Cargo:
cargo install code2prompt
Alternatively, you can build code2prompt from source via its GitHub repo: https://github.com/mufeedvh/code2prompt
Using code2prompt
The basic syntax is simple:
code2prompt <OPTIONS> <PATH>
Key Options
-t, --template: Specify a custom Handlebars template
-f, --filter: Filter specific file extensions
-e, --exclude: Exclude certain file extensions
--exclude-files: Exclude specific files
--exclude-folders: Exclude entire folders
--tokens: View the token count of the generated prompt
-o, --output: Save output to a file
-d, --diff: Include git diff
-l, --line-number: Add line numbers to source code
Common Usage Examples
1. Basic usage: analyze the entire project:
cd /path/to/project
code2prompt .
2. Filter specific languages:
code2prompt --filter py,js .
3. Exclude test files and dependencies:
code2prompt --exclude-folders node_modules,tests --exclude-files test.py .
4. Save output with line numbers:
code2prompt --line-number --output prompt.txt .
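Before pasting a saved prompt into a model, it’s worth checking that it will fit the context window. The --tokens option listed above reports an exact count; as a rough standalone fallback, a word count with standard tools gives a lower bound (the placeholder file below stands in for real code2prompt output):

```shell
# Placeholder standing in for output from: code2prompt --output prompt.txt .
printf 'Project: demo\n\nsrc/app.py:\ndef main():\n    pass\n' > prompt.txt

# Words are a crude proxy for tokens; source code usually tokenizes to
# more tokens than words, so treat this as a lower bound
words=$(wc -w < prompt.txt)
echo "at least ~$words tokens"
```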
Using the Generated Prompt with AI
Not all AI models are created equal when it comes to handling large prompts. You’ll need an LLM with a substantial context window. Here are my recommended options:
1. v0 by Vercel: Currently my top choice for code generation
2. Gemini 1.5 Pro (experimental): A solid backup when v0’s daily limits are reached
Crafting the Perfect Prompt
Here’s my template for getting the best results:
Context from my codebase:
[Paste code2prompt output here]

Tech stack specification:
Backend: [Your backend framework]
Frontend: [Your frontend framework]
Database: [Your database]
Task: [Describe what you need generated]
Additional requirements:
[List any specific requirements]
[Include any patterns to follow]
[Mention technologies to avoid]
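To make the template concrete, here’s a minimal shell sketch that stitches a saved code2prompt output together with a task description into a single prompt file. All filenames, the tech stack, and the task text are illustrative, and the placeholder context file stands in for real code2prompt output:

```shell
# Placeholder standing in for: code2prompt --output codebase_context.txt .
printf 'src/app.py:\ndef signup(user):\n    pass\n' > codebase_context.txt

# Assemble the final prompt following the template above
{
  echo "Context from my codebase:"
  cat codebase_context.txt
  echo
  echo "Tech stack specification:"
  echo "Backend: Flask"
  echo "Frontend: React"
  echo "Database: PostgreSQL"
  echo
  echo "Task: Add input validation to the signup endpoint"
  echo "Additional requirements:"
  echo "- Follow the existing error-handling pattern"
  echo "- Avoid adding new third-party dependencies"
} > full_prompt.txt
```

Keeping the prompt in a file like this also makes it easy to re-run with a tweaked task description instead of reassembling everything by hand.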
Best Practices
1. Be Specific: Always specify your tech stack explicitly, even if it’s in the codebase context.
2. Set Boundaries: Clearly state what technologies or approaches to avoid.
3. Review & Refactor: Always review generated code thoroughly. AI is an assistant, not a replacement for good coding practices.
4. Incremental Usage: Start with smaller, well-defined tasks before using it for complex features.
When to Use This Approach
This tool shines in several scenarios:
- Tight deadlines requiring rapid development
- Generating boilerplate code that needs to match existing patterns
- Building features similar to existing ones in your codebase
- Refactoring code while maintaining project consistency
Conclusion
While I still advocate for writing your own code whenever possible, tools like code2prompt make AI code generation more practical and reliable. By providing comprehensive context about your codebase, you’re more likely to get generated code that actually fits your project’s needs and patterns.
Remember, AI should complement your development process, not replace critical thinking and problem-solving. Use this tool wisely, and always review and understand the code you’re integrating into your project.