Autonomous job search and application system designed specifically for instructional design and e-learning development positions. Upload your materials, configure your preferences, and let AI agents find and apply to jobs for you.
- Autonomous Job Discovery: Automatically scrapes job boards daily for relevant positions
- AI-Powered Matching: Scores jobs based on how well they match your profile
- Smart Application System: Auto-fills applications with your materials
- Resume Customization: AI tailors your resume for each job
- Cover Letter Generation: Creates personalized cover letters
- Application Tracking: Tracks all applications, responses, and interviews
- Daily Notifications: Email digests of new opportunities
- Analytics Dashboard: Track your job search metrics
Place your files in the user_materials/ folder:
```
user_materials/
├── resumes/
│   └── your_resume.pdf        # Your primary resume
├── cover_letters/
│   └── template.txt           # Cover letter template (optional)
├── portfolio/
│   └── portfolio_link.txt     # Link to your portfolio
└── credentials/
    └── certifications.pdf     # Any certifications
```
Edit config/config.yaml to set:
- Your target roles (instructional designer, e-learning developer, etc.)
- Your skills and experience
- Location preferences
- Salary requirements
- Which job boards to search
- Automation settings (auto-apply on/off)
```bash
# Install Python 3.9+
python --version

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment variables
cp .env.example .env

# Edit .env with your credentials
nano .env  # or use your preferred editor
```

```bash
# Test run (finds jobs but doesn't apply)
python scripts/run_daily_scrape.py --test

# Full automated run
python scripts/run_daily_scrape.py

# Set up daily automation (cron job)
crontab -e
# Add: 0 9 * * * cd /path/to/project && /path/to/venv/bin/python scripts/run_daily_scrape.py
```

Example `config/config.yaml`:

```yaml
user_profile:
  target_roles:
    - "Instructional Designer"
    - "E-Learning Developer"
  key_skills:
    - "Articulate Storyline"
    - "ADDIE Model"
    - "LMS"

search_criteria:
  remote_only: true
  salary_minimum: 60000

automation:
  auto_apply:
    enabled: false              # Set to true for autonomous applications
    max_applications_per_day: 20
    min_match_score: 0.7        # Only apply to jobs that match 70%+
```

Example `.env`:

```bash
# AI for resume/cover letter customization
ANTHROPIC_API_KEY=your_key_here

# Email notifications
EMAIL_FROM=your@email.com
EMAIL_PASSWORD=your_app_password
EMAIL_TO=your@email.com
```

The system searches these job boards:
- LinkedIn (Easy Apply jobs)
- Indeed
- Instructional Design Central
- EdTech.com
- Teamed for Learning
Jobs are automatically filtered based on:
- Keyword matching (your skills and target roles)
- Location requirements
- Salary range
- Employment type
- Experience level
- Remote/on-site preference
Each job gets a match score (0-1) based on:
- How many of your skills are mentioned
- Whether the title matches your target roles
- Job requirements alignment
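The scoring criteria above can be sketched as a simple keyword-overlap function. This is an illustrative toy, not the project's actual scoring code; the dict field names (`title`, `description`, `key_skills`, `target_roles`) and the 70/30 weighting are assumptions.

```python
def match_score(job, profile):
    """Toy scoring: fraction of profile skills mentioned, plus a title bonus.

    `job` and `profile` are plain dicts; the field names and weights here
    are illustrative, not the project's real schema.
    """
    text = (job["title"] + " " + job["description"]).lower()
    skills = profile["key_skills"]
    skill_hits = sum(1 for s in skills if s.lower() in text)
    skill_score = skill_hits / len(skills) if skills else 0.0
    title_hit = any(r.lower() in job["title"].lower() for r in profile["target_roles"])
    # Weight skill overlap 70% and title match 30%; clamp to [0, 1]
    return min(1.0, 0.7 * skill_score + 0.3 * (1.0 if title_hit else 0.0))

job = {"title": "Senior Instructional Designer",
       "description": "Experience with Articulate Storyline and LMS administration."}
profile = {"key_skills": ["Articulate Storyline", "ADDIE Model", "LMS"],
           "target_roles": ["Instructional Designer", "E-Learning Developer"]}
score = match_score(job, profile)  # 2 of 3 skills hit, title matches
```

With this profile the example job scores about 0.77, clearing a 0.7 threshold.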
When `automation.auto_apply.enabled` is set to `true`, the system:
- System finds matching jobs (above min_match_score)
- Customizes your resume using AI
- Generates a personalized cover letter
- Fills out the application
- Submits (if easy_apply_only: true)
- Tracks in database
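The steps above amount to a filter-then-apply loop. The sketch below shows the shape of that loop under stated assumptions: every helper (`customize_resume`, `write_cover_letter`, `submit_application`) is a placeholder stand-in, not the project's real API.

```python
def auto_apply(jobs, min_score=0.7, limit=20):
    """Sketch of the auto-apply loop; helpers below are placeholder stubs."""
    applied = []
    # Work through the best matches first
    for job in sorted(jobs, key=lambda j: j["score"], reverse=True):
        if len(applied) >= limit:
            break  # respect max_applications_per_day
        if job["score"] < min_score:
            break  # sorted descending, so everything after is below threshold
        resume = customize_resume(job)        # AI-tailored resume (stub)
        letter = write_cover_letter(job)      # personalized cover letter (stub)
        submit_application(job, resume, letter)
        applied.append(job["id"])
    return applied

# Placeholder implementations so the sketch runs
def customize_resume(job): return f"resume for {job['id']}"
def write_cover_letter(job): return f"letter for {job['id']}"
def submit_application(job, resume, letter): pass

jobs = [{"id": "a", "score": 0.9}, {"id": "b", "score": 0.5}, {"id": "c", "score": 0.75}]
result = auto_apply(jobs)  # applies to "a" and "c" only
```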
All applications are tracked with:
- Application date
- Status (submitted, viewed, rejected, interview, offer)
- Resume and cover letter used
- Response tracking
- Follow-up reminders
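The tracked fields above map naturally onto one SQLite table. This schema is illustrative only; the project's actual tables under `data/` may differ.

```python
import sqlite3

# Illustrative schema only; column names are assumptions, not the real tables.
conn = sqlite3.connect(":memory:")  # the real tool persists a file under data/
conn.execute("""
    CREATE TABLE applications (
        id INTEGER PRIMARY KEY,
        job_id TEXT NOT NULL,
        applied_on TEXT NOT NULL,          -- ISO date
        status TEXT DEFAULT 'submitted',   -- submitted/viewed/rejected/interview/offer
        resume_path TEXT,                  -- which resume version was sent
        cover_letter_path TEXT,
        follow_up_on TEXT                  -- follow-up reminder date
    )
""")
conn.execute(
    "INSERT INTO applications (job_id, applied_on, resume_path) VALUES (?, ?, ?)",
    ("abc123", "2025-01-15", "user_materials/resumes/your_resume.pdf"),
)
status = conn.execute("SELECT status FROM applications").fetchone()[0]
```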
Track your job search progress:
- Total jobs discovered
- Applications submitted
- Response rate
- Interview conversion rate
- Time to offer
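The funnel metrics above are plain ratios over the tracked counts; a minimal sketch (function name and output keys are illustrative):

```python
def funnel_stats(discovered, applied, responses, interviews):
    """Compute dashboard percentages; guards against division by zero."""
    rate = lambda n, d: round(100 * n / d, 1) if d else 0.0
    return {
        "discovered": discovered,
        "applications": applied,
        "response_rate_pct": rate(responses, applied),
        "interview_rate_pct": rate(interviews, applied),
    }

stats = funnel_stats(discovered=80, applied=15, responses=2, interviews=1)
```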
```
job-search-automation/
├── user_materials/          # PUT YOUR FILES HERE
│   ├── resumes/
│   ├── cover_letters/
│   ├── portfolio/
│   └── credentials/
├── config/
│   └── config.yaml          # Main configuration
├── src/
│   ├── scrapers/            # Job board scrapers
│   ├── filters/             # Job filtering logic
│   ├── database/            # Database operations
│   ├── notifications/       # Email/Slack notifications
│   └── utils/               # Helper functions
├── scripts/                 # Runnable scripts
├── data/                    # Database and exports
├── logs/                    # Log files
└── README.md                # This file
```
```bash
# Just discover and filter jobs
python scripts/run_daily_scrape.py --no-apply

# View filtered jobs
python scripts/view_jobs.py --min-score 0.7
```

```bash
# Find and rank jobs
python scripts/run_daily_scrape.py --no-apply

# Review top matches
python scripts/view_jobs.py --top 10

# Apply to specific job
python scripts/apply_to_job.py --job-id abc123
```

```bash
# Apply to all jobs above threshold
python scripts/run_daily_scrape.py --auto-apply
```

Or set in `config.yaml`:

```yaml
automation:
  auto_apply:
    enabled: true
```

```bash
# View all applications
python scripts/view_applications.py

# View statistics
python scripts/view_stats.py

# Export to CSV
python scripts/export_data.py --table applications --output my_apps.csv
```

Add custom scrapers in `src/scrapers/`:
```python
from .base_scraper import BaseScraper

class CustomBoardScraper(BaseScraper):
    def scrape(self, keywords):
        # Your scraping logic here
        pass
```

Import workflows from `n8n_workflows/` for:
- Automated daily job discovery
- Application status tracking
- Follow-up reminders
Configure AI behavior in `config/config.yaml`:

```yaml
ai:
  provider: "anthropic"
  model: "claude-3-5-sonnet-20241022"
  prompts:
    resume_customization: |
      Your custom prompt here...
```

View your progress:
```bash
# Terminal dashboard
python scripts/dashboard.py

# Export to Google Sheets
python scripts/export_to_sheets.py
```

- LinkedIn and Indeed have policies against automation
- Risk of account restriction or ban
- Use at your own risk
- Consider using browser extensions (lower risk) instead of scrapers
- Start with `auto_apply: false` to test the system
- Review the first 20 applications manually for quality control
- Use `min_match_score: 0.7` or higher to only apply to relevant jobs
- Limit applications with `max_applications_per_day`
- Monitor your email for responses
- Update your materials regularly
- Network in parallel - don't rely solely on automation
Built-in protections:
- 3-5 second delays between requests
- Max 50 requests per hour per site
- Random user agents
- Respects robots.txt
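The delay and hourly-cap protections above can be sketched with a sliding-window throttle. The class name, method names, and user-agent list are illustrative, not the project's real implementation:

```python
import random
import time
from collections import deque

USER_AGENTS = [  # rotated per request; entries here are just examples
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

class PoliteFetcher:
    """Sketch of the throttling described above: 3-5 s delays, 50 req/hour."""

    def __init__(self, max_per_hour=50):
        self.max_per_hour = max_per_hour
        self.timestamps = deque()  # monotonic times of recent requests

    def wait_turn(self, now=None, sleep=time.sleep):
        now = time.monotonic() if now is None else now
        # Drop requests older than an hour from the sliding window
        while self.timestamps and now - self.timestamps[0] > 3600:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_hour:
            sleep(3600 - (now - self.timestamps[0]))  # wait for window to open
        sleep(random.uniform(3, 5))  # jittered per-request delay
        self.timestamps.append(now)

    def headers(self):
        return {"User-Agent": random.choice(USER_AGENTS)}

fetcher = PoliteFetcher(max_per_hour=2)
fetcher.wait_turn(now=0, sleep=lambda s: None)  # no-op sleep for the demo
fetcher.wait_turn(now=1, sleep=lambda s: None)
```

Injecting `now` and `sleep` keeps the sketch testable without real waiting; production code would use the defaults.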
- All data stored locally (SQLite database)
- Credentials in `.env` file (not committed to git)
- Self-hosted option (no data sent to third parties)
- Sensitive files listed in `.gitignore`
See docs/ folder for detailed guides:
- `setup_guide.md` - Detailed setup instructions
- `scraper_guide.md` - How to add custom scrapers
- `troubleshooting.md` - Common issues and solutions
This is a personal automation tool, but feel free to:
- Fork and customize for your needs
- Submit issues for bugs
- Share improvements
For issues:
- Check `logs/` for error messages
- Review `docs/troubleshooting.md`
- Open an issue with logs
- Web dashboard UI
- More job board integrations
- Interview scheduling automation
- Salary negotiation insights
- Chrome extension for manual applications
- Mobile notifications
Pre-configured scrapers for:
- Instructional Design Central
- The eLearning Designer's Academy
- EdTech.com
- Teamed for Learning
Optimized for ID/e-learning skills:
- Articulate Storyline/Rise
- Adobe Captivate
- Camtasia
- ADDIE/SAM/Agile
- SCORM/xAPI
- LMS platforms
- Video production
- Graphic design tools
Portfolio links from `user_materials/portfolio/` are automatically included in applications.
MIT License - Use at your own risk
- First Time Setup: Run with the `--test` flag first
- Quality Over Quantity: Use higher match scores (0.7-0.8)
- Monitor Daily: Check email for responses
- Update Materials: Keep your resume current in `user_materials/`
- Track Everything: Use the database to analyze what works
Typical results (varies by experience/market):
- Jobs discovered: 50-100 per day
- Applications sent: 10-20 per day (with auto-apply)
- Response rate: 5-10%
- Interview rate: 2-5%
- Time saved: 10-15 hours per week
Ready to automate your job search?
- Upload your resume to `user_materials/resumes/`
- Configure `config/config.yaml` with your preferences
- Set up `.env` with your credentials
- Run `python scripts/run_daily_scrape.py --test`
- Review results and enable auto-apply when ready
Need help? Check the docs or open an issue!