Overview

The SkillMatchIQ Batch Processing API enables you to analyze multiple candidate resumes against job descriptions at scale. Our AI-powered engine provides detailed matching scores, skill gap analysis, and actionable recommendations.

🚀 Lightning Fast

Process hundreds of candidates in minutes with our optimized parallel processing engine.

📊 Rich Analytics

Get detailed insights including match scores, skill gaps, and visual knowledge graphs.

🔄 Flexible Input

Support for ZIP files, CSV, cloud storage (Google Drive, AWS S3), and databases.

💡
Pro Tip: Our API processes job descriptions once per batch, significantly reducing processing time and costs when analyzing multiple candidates.

Authentication

All API requests require authentication using your API key. Include your key in the request headers:

Authorization: Bearer YOUR_API_KEY

To get your API key:

  1. Sign up for a SkillMatchIQ account
  2. Navigate to your dashboard
  3. Click on "API Keys" in the settings menu
  4. Generate a new API key
⚠️
Security: Keep your API key secure and never expose it in client-side code or public repositories.
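
In Python, for example, the key can be read from an environment variable instead of being hard-coded (the variable name is just a convention, not required by the API):

import os

# Raises KeyError if the variable is not set
API_KEY = os.environ['SKILLMATCHIQ_API_KEY']
headers = {'Authorization': f'Bearer {API_KEY}'}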

Quick Start

Get started with our API in just a few minutes. Here's a simple example that analyzes resumes from a ZIP file, shown in Python, JavaScript, and cURL.

Python

import requests
import time

# Your API configuration
API_KEY = "your_api_key_here"
API_BASE_URL = "https://yourdomain.com"  # Your actual domain

# Prepare your files
with open('resumes.zip', 'rb') as f:
    files = {'candidate_source_file': ('resumes.zip', f, 'application/zip')}
    
    data = {
        'input_type': 'ZIP_FILE',  # See Input Types for all supported values
        'job_description_text': 'Looking for a Senior Software Engineer with Python and React experience...'
    }
    
    headers = {'Authorization': f'Bearer {API_KEY}'}
    
    # Submit the batch job
    response = requests.post(
        f'{API_BASE_URL}/api/v1/batch-jobs',
        data=data,
        files=files,
        headers=headers
    )
    
    batch_job_id = response.json()['batch_job_id']
    print(f"Batch submitted: {batch_job_id}")

# Check status
while True:
    status = requests.get(
        f'{API_BASE_URL}/api/v1/batch-jobs/{batch_job_id}/status',
        headers=headers
    ).json()
    
    processed = status['processed_candidates']
    total = status['total_candidates']
    print(f"Progress: {processed}/{total} candidates")
    
    if status['status'] == 'completed':
        break
    elif status['status'] == 'failed':
        print(f"Batch failed: {status.get('error_message')}")
        break
    
    time.sleep(5)

# Get results
results = requests.get(
    f'{API_BASE_URL}/api/v1/batch-jobs/{batch_job_id}/results?skip=0&limit=10',  # First page of results
    headers=headers
).json()

# Show the first five results
for item in results['processed_items'][:5]:
    score = item['match_score']['overall_score']
    filename = item['original_filename']
    print(f"{filename} - {score}%")
JavaScript

const FormData = require('form-data');
const fs = require('fs');
const axios = require('axios');

const API_KEY = 'your_api_key_here';
const API_BASE_URL = 'https://yourdomain.com';  // Your actual domain

async function analyzeCandidates() {
    // Prepare form data
    const form = new FormData();
    form.append('input_type', 'ZIP_FILE');
    form.append('job_description_text', 'Looking for a Senior Software Engineer...');
    form.append('candidate_source_file', fs.createReadStream('resumes.zip'));
    
    // Submit batch job
    const submitResponse = await axios.post(
        `${API_BASE_URL}/api/v1/batch-jobs`,
        form,
        {
            headers: {
                ...form.getHeaders(),
                'Authorization': `Bearer ${API_KEY}`
            }
        }
    );
    
    const batchId = submitResponse.data.batch_job_id;
    console.log(`Batch submitted: ${batchId}`);
    
    // Poll for completion
    let status;
    do {
        await new Promise(resolve => setTimeout(resolve, 5000));
        
        const statusResponse = await axios.get(
            `${API_BASE_URL}/api/v1/batch-jobs/${batchId}/status`,
            {
                headers: {
                    'Authorization': `Bearer ${API_KEY}`
                }
            }
        );
        
        status = statusResponse.data;
        console.log(`Progress: ${status.processed_candidates}/${status.total_candidates} candidates`);
        
        if (status.status === 'failed') {
            throw new Error(status.error_message || 'Batch failed');
        }
    } while (status.status !== 'completed');
    
    // Get results
    const resultsResponse = await axios.get(
        `${API_BASE_URL}/api/v1/batch-jobs/${batchId}/results`,
        {
            headers: {
                'Authorization': `Bearer ${API_KEY}`
            }
        }
    );
    
    const results = resultsResponse.data;
    const topMatch = results.processed_items[0];
    console.log(`Top match: ${topMatch.original_filename} - ${topMatch.match_score.overall_score}%`);
}

analyzeCandidates();
cURL

# Submit batch job
curl -X POST https://yourdomain.com/api/v1/batch-jobs \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "input_type=ZIP_FILE" \
  -F "job_description_text=Looking for a Senior Software Engineer with Python and React experience..." \
  -F "[email protected]"

# Response: {"batch_job_id": "abc123-def456-ghi789", "status": "pending"}

# Check status
curl https://yourdomain.com/api/v1/batch-jobs/abc123-def456-ghi789/status \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get results
curl https://yourdomain.com/api/v1/batch-jobs/abc123-def456-ghi789/results \
  -H "Authorization: Bearer YOUR_API_KEY"

Batch Processing

Our batch processing system is designed to handle large volumes of candidates efficiently. There are two processing modes:

Asynchronous Processing (Recommended)

Best for large batches. Submit your job and check back for results.

  1. Submit Job: Upload your candidates and job description. Receive a batch ID immediately.
  2. Monitor Progress: Check status anytime using your batch ID. Get real-time progress updates.
  3. Retrieve Results: Once complete, fetch detailed results including scores and recommendations.

Synchronous Processing

For smaller batches (under 100 candidates), get results immediately in a single API call.

POST /api/batch/sync
{
    "input_type": "CSV_FILE",
    "job_description_text": "Your job description here...",
    "max_candidates": 50,
    "timeout": 300
}
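
A minimal synchronous call in Python might look like this sketch, assuming the endpoint accepts the same bearer-token auth and multipart upload as the asynchronous endpoint:

import requests

API_KEY = "your_api_key_here"
API_BASE_URL = "https://yourdomain.com"  # Your actual domain

with open('candidates.csv', 'rb') as f:
    response = requests.post(
        f'{API_BASE_URL}/api/batch/sync',
        headers={'Authorization': f'Bearer {API_KEY}'},
        data={
            'input_type': 'CSV_FILE',
            'job_description_text': 'Your job description here...',
            'max_candidates': 50,
            'timeout': 300
        },
        files={'candidate_source_file': ('candidates.csv', f, 'text/csv')}
    )

# Synchronous jobs return the full results payload directly
results = response.json()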

Input Types

SkillMatchIQ supports multiple input sources to fit your workflow:

Input Type     Description                                             Best For
ZIP_FILE       ZIP archive containing resume files (PDF, DOCX, TXT)    Bulk uploads from ATS exports
CSV_FILE       CSV with candidate data and resume text                 Structured data from databases
GOOGLE_DRIVE   Google Drive folder containing resumes                  Teams using Google Workspace
AWS_S3         AWS S3 bucket with resume files                         Enterprise cloud storage
DATABASE       Direct database query (PostgreSQL, MySQL, SQLite)       Existing HR systems

CSV File Format

When using CSV files, ensure your data follows this format:

candidate_id,name,resume_text,filename
1,John Doe,"Software Engineer with 5 years Python experience...",john_doe.pdf
2,Jane Smith,"Data Scientist specializing in machine learning...",jane_smith.docx
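
For example, a compliant file can be assembled from existing records with Python's standard csv module (the rows here are illustrative):

import csv

candidates = [
    (1, 'John Doe', 'Software Engineer with 5 years Python experience...', 'john_doe.pdf'),
    (2, 'Jane Smith', 'Data Scientist specializing in machine learning...', 'jane_smith.docx'),
]

with open('candidates.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # The header row must match the expected column names exactly
    writer.writerow(['candidate_id', 'name', 'resume_text', 'filename'])
    writer.writerows(candidates)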

API Endpoints

Complete reference for all available API endpoints:

POST /api/v1/batch-jobs

Submit a new batch job for asynchronous processing.

Request Parameters

Parameter               Type          Required      Description
input_type              enum          Yes           ZIP_FILE, CSV_FILE, GOOGLE_DRIVE, AWS_S3, DATABASE
job_description_text    string        Yes           Job description text
candidate_source_file   file          Conditional   Required for ZIP_FILE and CSV_FILE types
source_config_json      JSON string   Conditional   Configuration for cloud/database sources
metadata_json           JSON string   No            Additional metadata for the batch
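
Cloud and database sources have no file upload; pass the connection details as source_config_json instead. A sketch for a Google Drive source, using the folder_id key shown in the Google Drive example under Real-World Examples (all other values are illustrative):

import json
import requests

API_KEY = "your_api_key_here"
API_BASE_URL = "https://yourdomain.com"  # Your actual domain

source_config = {"folder_id": "student_resumes_2024_folder_id"}

response = requests.post(
    f'{API_BASE_URL}/api/v1/batch-jobs',
    headers={'Authorization': f'Bearer {API_KEY}'},
    data={
        'input_type': 'GOOGLE_DRIVE',
        'job_description_text': 'Looking for a Senior Software Engineer...',
        'source_config_json': json.dumps(source_config)
    }
)
batch_job_id = response.json()['batch_job_id']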

Response (HTTP 202 Accepted)

{
    "batch_job_id": "abc123-def456-ghi789",
    "user_id": 123,
    "status": "pending",
    "total_candidates": 0,
    "processed_candidates": 0,
    "created_at": "2024-01-15T10:30:00Z",
    "updated_at": "2024-01-15T10:30:00Z"
}

GET /api/v1/batch-jobs/{batch_job_id}/status

Check the current status and progress of a batch job.

Response

{
    "batch_job_id": "abc123-def456-ghi789",
    "user_id": 123,
    "status": "processing",
    "total_candidates": 150,
    "processed_candidates": 75,
    "created_at": "2024-01-15T10:30:00Z",
    "updated_at": "2024-01-15T10:30:45Z",
    "completed_at": null,
    "error_message": null
}

Status Values

  • pending - Job received and waiting to start
  • processing - Currently analyzing candidates
  • completed - Successfully finished
  • failed - Error occurred
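
A small polling helper that reacts to each status value might look like this sketch (the interval and error handling are yours to tune):

import time
import requests

def wait_for_batch(base_url, api_key, batch_job_id, interval=5):
    """Poll the status endpoint until the job reaches a terminal state."""
    headers = {'Authorization': f'Bearer {api_key}'}
    while True:
        status = requests.get(
            f'{base_url}/api/v1/batch-jobs/{batch_job_id}/status',
            headers=headers
        ).json()
        if status['status'] == 'completed':
            return status
        if status['status'] == 'failed':
            raise RuntimeError(status.get('error_message') or 'Batch failed')
        # Still pending or processing: wait and poll again
        time.sleep(interval)
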
GET /api/v1/batch-jobs/{batch_job_id}/results

Retrieve the results of a completed batch job.

Query Parameters

Parameter   Type      Default   Description
limit       integer   100       Maximum results to return
skip        integer   0         Number of results to skip
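
To page through a large batch, advance skip by limit until a page comes back short; a minimal sketch:

import requests

def fetch_all_results(base_url, api_key, batch_job_id, page_size=100):
    """Collect every processed item by walking the skip/limit window."""
    headers = {'Authorization': f'Bearer {api_key}'}
    items, skip = [], 0
    while True:
        page = requests.get(
            f'{base_url}/api/v1/batch-jobs/{batch_job_id}/results',
            params={'skip': skip, 'limit': page_size},
            headers=headers
        ).json()
        batch = page['processed_items']
        items.extend(batch)
        if len(batch) < page_size:
            return items
        skip += page_size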

Response

{
  "batch_job_id": "batch_abc123def456",
  "user_id": 42,
  "status": "completed",
  "job_title": "Senior Full Stack Developer",
  "job_summary": "Requires: React, Node.js, AWS, Docker, TypeScript",
  "total_candidates": 25,
  "processed_candidates": 25,
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-01-15T10:45:00Z",
  "completed_at": "2024-01-15T10:45:00Z",
  "error_message": null,
  "summary_stats": {
    "total_candidates": 25,
    "successful_matches": 24,
    "error_count": 1,
    "average_score": 73.2,
    "highest_score": 94.8,
    "lowest_score": 32.1,
    "candidates_above_70": 15,
    "candidates_above_80": 8,
    "candidates_above_90": 2
  },
  "processed_items": [
    {
      "candidate_id_internal": "candidate_001",
      "original_filename": "john_doe_resume.pdf",
      "match_score": {
        "overall_score": 94.8,
        "skill_match_percentage": 92.3,
        "core_skill_match_percentage": 96.7,
        "role_match_percentage": 100.0,
        "qualification_match_percentage": 85.0,
        "sector_match_percentage": 90.0
      },
      "match_results_summary": {
        "skills": {
          "matches": [
            {
              "name": "React",
              "match_type": "exact",
              "match_score": 100.0,
              "proficiency_match": true,
              "duration_match": true,
              "is_core": true
            }
          ],
          "gaps": [
            {
              "name": "AWS Lambda",
              "is_core": true
            }
          ],
          "surpluses": [
            {
              "name": "Docker",
              "is_core": false
            }
          ]
        },
        "roles": {
          "matches": [{"name": "Full Stack Developer", "type": "match"}],
          "gaps": [],
          "surpluses": [{"name": "DevOps Engineer", "type": "surplus"}]
        },
        "qualifications": {
          "matches": [{"name": "Bachelor's Degree", "type": "match"}],
          "gaps": [{"name": "AWS Certification", "type": "gap"}],
          "surpluses": []
        },
        "sectors": {
          "matches": [{"name": "Technology", "type": "match"}],
          "gaps": [],
          "surpluses": []
        }
      },
      "error": null
    }
  ]
}
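
As an example of working with this payload, the sketch below ranks successfully processed candidates by overall score and tallies the most common core-skill gaps (it assumes results holds the JSON above, e.g. from response.json()):

from collections import Counter

# Skip items that failed to process
items = [i for i in results['processed_items'] if i['error'] is None]

# Rank candidates by overall match score, best first
ranked = sorted(items, key=lambda i: i['match_score']['overall_score'], reverse=True)
for item in ranked[:5]:
    print(item['original_filename'], item['match_score']['overall_score'])

# Count how often each core skill is missing across the batch
gap_counts = Counter(
    gap['name']
    for item in items
    for gap in item['match_results_summary']['skills']['gaps']
    if gap['is_core']
)
print(gap_counts.most_common(5))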

Analytics & Insights

Track your recruitment metrics and identify trends over time with our analytics API.

GET /api/analytics/meta

Retrieve processed analytics data for your dashboard.

Query Parameters

Parameter    Type     Default   Description
time_range   enum     MONTH     WEEK (7 days), MONTH (30), QUARTER (90), YEAR (365)
job_filter   string   null      Filter by specific job ID
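
A minimal request for quarterly analytics, assuming the same bearer-token auth as the batch endpoints (the job_filter value is illustrative):

import requests

response = requests.get(
    'https://yourdomain.com/api/analytics/meta',  # Your actual domain
    params={'time_range': 'QUARTER', 'job_filter': 'tech_role_2024'},
    headers={'Authorization': 'Bearer your_api_key_here'}
)
analytics = response.json()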

Real-World Examples

See how different organizations use our API:

Example 1: Recruitment Agency

Process weekly candidate submissions from multiple clients:

// Weekly batch processing
const processWeeklyCandidates = async () => {
    // Upload a ZIP file with all resumes
    const form = new FormData();
    form.append('input_type', 'ZIP_FILE');
    form.append('job_description_text', await getJobDescription('tech_role_2024'));
    form.append('candidate_source_file', weeklyResumesZip);
    form.append('job_id', 'tech_role_2024'); // For analytics
    
    const response = await submitBatch(form);
    
    // Email the top 10 matches to the client
    const results = await getResults(response.batch_job_id);
    await emailTopMatches(results.processed_items.slice(0, 10));
};

Example 2: Enterprise HR System

Integrate with existing applicant tracking system:

Database source configuration (passed as source_config_json):

{
    "type": "postgresql",
    "connection_string": "postgresql://hr_user:pass@localhost:5432/ats",
    "query": "SELECT id, name, resume_content FROM applicants WHERE position_id = 12345"
}

Example 3: University Career Center

Match students with internship opportunities:

# Process student resumes from Google Drive
config = {
    "folder_id": "student_resumes_2024_folder_id"
}

# Match against multiple internship positions
for position in internship_positions:
    response = api.submit_batch(
        input_type="GOOGLE_DRIVE",
        job_description_text=position['description'],
        job_id=position['id'],
        source_config=config
    )
    
    # Store results for student advisors
    save_matching_results(position['id'], response['batch_job_id'])

Error Codes

Understanding and handling API errors:

Code   Description                          Solution
400    Bad Request - invalid parameters     Check required fields and data formats
401    Unauthorized - invalid API key       Verify your API key is correct
404    Not Found - batch ID doesn't exist   Check that the batch_job_id is correct
413    Payload Too Large                    Reduce file size or split into smaller batches
429    Too Many Requests                    Implement rate limiting and wait before retrying
500    Internal Server Error                Contact support if the error persists
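
For 429 and transient 5xx responses, a simple exponential-backoff wrapper is usually enough; a sketch:

import time
import requests

def request_with_retry(method, url, max_attempts=5, **kwargs):
    """Retry on 429/5xx with exponential backoff; return any other response."""
    for attempt in range(max_attempts):
        response = requests.request(method, url, **kwargs)
        if response.status_code == 429 or response.status_code >= 500:
            # Honor Retry-After when present, otherwise back off exponentially
            delay = float(response.headers.get('Retry-After', 2 ** attempt))
            time.sleep(delay)
            continue
        return response
    response.raise_for_status()
    return response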

Error Response Format

{
    "error": {
        "code": "INVALID_INPUT",
        "message": "The 'input_type' field is required",
        "details": {
            "field": "input_type",
            "provided": null,
            "expected": ["zip_file", "csv_file", "google_drive", "aws_s3", "database"]
        }
    }
}

Best Practices

🚀 Performance Optimization

Use Job IDs

Provide consistent job_id values for similar positions to leverage our caching system.

Batch Sizing

Keep batches under 500 candidates for optimal processing time.

Async for Large Jobs

Use asynchronous processing for batches over 50 candidates.

📊 Data Quality

  • Ensure resume text is clean and properly formatted
  • Provide detailed job descriptions for better matching
  • Include both required and preferred skills in job descriptions
  • Use consistent naming for skills across job postings

🔒 Security

  • Store API keys in environment variables, never in code
  • Use HTTPS for all API communications
  • Implement proper access controls for cloud storage
  • Regularly rotate API keys
  • Monitor API usage for unusual patterns

📈 Monitoring

Track these key metrics for optimal performance:

  • Average processing time per candidate
  • Match score distribution
  • Common skill gaps across positions
  • API error rates and types
  • Batch completion rates

Next Steps