II. Page JSON
Structured · livepage:docs-user-guide-tutorials-intermediate-custom-process
Intermediate Tutorial: Custom Process Definition
Inspect the normalized record payload exactly as the atlas UI reads it.
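Before the payload itself, a minimal sketch of how a record with this shape can be consumed in plain Node.js. The field names mirror the payload below; the values are abbreviated stand-ins, not the full record:

```javascript
// Sketch: reading a normalized Page record shaped like the payload below.
// Values are abbreviated stand-ins, not the full record.
const record = {
  id: 'page:docs-user-guide-tutorials-intermediate-custom-process',
  _kind: 'Page',
  _cluster: 'wiki',
  attributes: {
    title: 'Intermediate Tutorial: Custom Process Definition',
    slug: 'docs/user-guide/tutorials/intermediate-custom-process',
    article: '# Intermediate Tutorial: Custom Process Definition ...', // full markdown in the real record
  },
  outgoingEdges: [],
  incomingEdges: [],
};

// A viewer would typically key the page by id and render attributes.article as markdown.
console.log(`${record._kind} ${record.id} -> ${record.attributes.slug}`);
```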
{
"id": "page:docs-user-guide-tutorials-intermediate-custom-process",
"_kind": "Page",
"_file": "wiki/docs/user-guide/tutorials/intermediate-custom-process.md",
"_cluster": "wiki",
"attributes": {
"nodeKind": "Page",
"sourcePath": "docs/user-guide/tutorials/intermediate-custom-process.md",
"sourceKind": "repo-docs",
"title": "Intermediate Tutorial: Custom Process Definition",
"displayName": "Intermediate Tutorial: Custom Process Definition",
"slug": "docs/user-guide/tutorials/intermediate-custom-process",
"articlePath": "wiki/docs/user-guide/tutorials/intermediate-custom-process.md",
"article": "\n# Intermediate Tutorial: Custom Process Definition\n\n**Version:** 1.0\n**Date:** 2026-01-25\n**Category:** Tutorial\n**Level:** Intermediate\n**Estimated Time:** 60-90 minutes\n**Primary Personas:** Sarah Chen (Productivity-Focused Developer), Elena Rodriguez (DevOps Engineer)\n\n---\n\n## Learning Objectives\n\nBy the end of this tutorial, you will be able to:\n\n1. **Create a custom process definition** that orchestrates multiple tasks\n2. **Implement parallel execution** for independent tasks to improve performance\n3. **Add strategic breakpoints** for human approval at critical decision points\n4. **Configure quality checks** that run as part of your workflow\n5. **Parameterize your process** to make it reusable across different scenarios\n\n---\n\n## Prerequisites\n\nBefore starting this tutorial, please ensure you have:\n\n- [ ] Completed the **[Beginner Tutorial: Build a Simple REST API](./beginner-rest-api.md)**\n- [ ] Understanding of **JavaScript async/await** patterns\n- [ ] Familiarity with **Jest** or similar testing frameworks\n- [ ] **Babysitter CLI** installed globally (`npm install -g @a5c-ai/babysitter@latest`)\n- [ ] A project where you want to implement a custom build and deploy workflow\n\n### Verify Prerequisites\n\n```bash\n# Verify CLI installation\nbabysitter --version\n```\n\n### About Breakpoints\n\n**Interactive Mode (Claude Code)**: When running in Claude Code, breakpoints are handled directly in the chat - no service needed!\n\n**Non-Interactive Mode**: For CI/CD or headless automation, breakpoints are disabled:\n\n---\n\n## What We're Building\n\nIn this tutorial, we will create a **custom build and deploy process** that:\n\n1. **Lints the code** in parallel with **running unit tests**\n2. **Builds the application** after lint and tests pass\n3. **Runs security scan** on the build output\n4. **Waits for human approval** before deploying\n5. **Deploys to staging** environment\n6. 
**Runs integration tests** against staging\n7. **Waits for final approval** before production deployment\n8. **Deploys to production** (simulated)\n\nThis process demonstrates several powerful Babysitter features:\n\n- **Parallel execution** for independent tasks\n- **Sequential execution** for dependent tasks\n- **Multiple breakpoints** for staged approvals\n- **Quality gates** with pass/fail criteria\n- **Parameterized inputs** for flexibility\n\n### Process Flow Diagram\n\n```\n                 +------------------+\n                 |  Process Start   |\n                 +--------+---------+\n                          |\n         +----------------+----------------+\n         |                                 |\n+--------v---------+             +--------v---------+\n|     Run Lint     |             |  Run Unit Tests  |\n|    (parallel)    |             |    (parallel)    |\n+--------+---------+             +--------+---------+\n         |                                 |\n         +----------------+----------------+\n                          |\n                 +--------v---------+\n                 |    Build App     |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 |  Security Scan   |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 |   BREAKPOINT:    |\n                 | Staging Approval |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 |  Deploy Staging  |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 | Integration Test |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 |   BREAKPOINT:    |\n                 |  Prod Approval   |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 |   Deploy Prod    |\n                 +--------+---------+\n                          |\n                 +--------v---------+\n                 | Process Complete |\n                 +------------------+\n```\n\n---\n\n## Step 1: Understand Process Definition Structure\n\nBefore we write code, let's understand how Babysitter process definitions work.\n\nA **process definition** is a JavaScript function that orchestrates tasks using a context object (`ctx`). 
The context provides methods for:\n\n| Method | Purpose | Example |\n|--------|---------|---------|\n| `ctx.task(taskDef, args)` | Execute a single task | Run tests, build code |\n| `ctx.breakpoint(opts)` | Wait for human approval | Deployment gates |\n| `ctx.parallel.all([...])` | Run tasks concurrently | Lint and test together |\n| `ctx.log(message)` | Log to the journal | Progress updates |\n| `ctx.getState(key)` | Read process state | Retrieve cached values |\n| `ctx.setState(key, value)` | Write process state | Store intermediate results |\n\n### Basic Structure\n\n```javascript\n// process-definition.js\nexport async function process(inputs, ctx) {\n  // inputs: Parameters passed when starting the run\n  // ctx: Context object with orchestration methods\n\n  // Your workflow logic here\n  const result = await ctx.task(someTask, { arg: 'value' });\n\n  return { success: true, result };\n}\n```\n\n**Checkpoint 1:** You understand the basic structure of a process definition.\n\n---\n\n## Step 2: Create Your Project Structure\n\nLet's set up the project structure for our custom process.\n\n```bash\n# Create a new directory for this tutorial\nmkdir custom-process-tutorial\ncd custom-process-tutorial\n\n# Initialize the project\nnpm init -y\n\n# Create the directory structure\nmkdir -p .a5c/processes\nmkdir -p src\nmkdir -p tests\nmkdir -p scripts\n```\n\nYour directory should now look like:\n\n```\ncustom-process-tutorial/\n  .a5c/\n    processes/      # Where process definitions live\n  src/              # Application source code\n  tests/            # Test files\n  scripts/          # Build and deploy scripts\n  package.json\n```\n\n---\n\n## Step 3: Define Your Task Definitions\n\nBefore writing the main process, we need to define the individual tasks. Create a file for task definitions:\n\n```bash\ntouch .a5c/processes/tasks.js\n```\n\nNow let's define our tasks. 
Open `.a5c/processes/tasks.js`:\n\n```javascript\n// .a5c/processes/tasks.js\n// Task definitions for our build and deploy process\n\n/**\n * Lint Task\n * Runs ESLint on the codebase\n */\nexport const lintTask = {\n  type: 'shell',\n  name: 'lint',\n  description: 'Run ESLint on source code',\n  command: 'npm run lint',\n  timeout: 60000, // 1 minute\n  retries: 0,\n};\n\n/**\n * Unit Test Task\n * Runs Jest unit tests with coverage\n */\nexport const unitTestTask = {\n  type: 'shell',\n  name: 'unit-tests',\n  description: 'Run Jest unit tests with coverage',\n  command: 'npm test -- --coverage --passWithNoTests',\n  timeout: 300000, // 5 minutes\n  retries: 1,\n};\n\n/**\n * Build Task\n * Compiles/bundles the application\n */\nexport const buildTask = {\n  type: 'shell',\n  name: 'build',\n  description: 'Build the application',\n  command: 'npm run build',\n  timeout: 180000, // 3 minutes\n  retries: 0,\n};\n\n/**\n * Security Scan Task\n * Runs npm audit and checks for vulnerabilities\n */\nexport const securityScanTask = {\n  type: 'shell',\n  name: 'security-scan',\n  description: 'Run security vulnerability scan',\n  command: 'npm audit --audit-level=high',\n  timeout: 120000, // 2 minutes\n  retries: 0,\n  allowFailure: true, // Continue even if vulnerabilities found\n};\n\n/**\n * Deploy Staging Task\n * Deploys to staging environment (simulated)\n */\nexport const deployStagingTask = {\n  type: 'shell',\n  name: 'deploy-staging',\n  description: 'Deploy to staging environment',\n  command: 'echo \"Deploying to staging...\" && sleep 2 && echo \"Staging deployment complete\"',\n  timeout: 300000, // 5 minutes\n  retries: 1,\n};\n\n/**\n * Integration Test Task\n * Runs integration tests against staging\n */\nexport const integrationTestTask = {\n  type: 'shell',\n  name: 'integration-tests',\n  description: 'Run integration tests against staging',\n  command: 'npm run test:integration || echo \"No integration tests configured\"',\n  timeout: 600000, // 10 minutes\n  retries: 1,\n};\n\n/**\n 
* Deploy Production Task\n * Deploys to production environment (simulated)\n */\nexport const deployProductionTask = {\n  type: 'shell',\n  name: 'deploy-production',\n  description: 'Deploy to production environment',\n  command: 'echo \"Deploying to production...\" && sleep 3 && echo \"Production deployment complete\"',\n  timeout: 600000, // 10 minutes\n  retries: 0,\n};\n\n/**\n * Agent Task: Quality Assessment\n * Uses an LLM to assess overall build quality\n */\nexport const qualityAssessmentTask = {\n  type: 'agent',\n  name: 'quality-assessment',\n  description: 'AI assessment of build quality',\n  prompt: `\n    Analyze the following build results and provide a quality score (0-100):\n\n    Lint Results: {{lintResult}}\n    Test Results: {{testResult}}\n    Security Scan: {{securityResult}}\n\n    Consider:\n    - Code quality (lint errors/warnings)\n    - Test coverage and passing rate\n    - Security vulnerabilities\n\n    Respond with a JSON object: { \"score\": number, \"issues\": string[], \"recommendations\": string[] }\n  `,\n  timeout: 60000,\n};\n```\n\n**Key Points:**\n\n- **Shell tasks** execute command-line commands\n- **Agent tasks** use LLM capabilities for analysis\n- Each task has a **timeout** to prevent hanging\n- Some tasks allow **retries** for transient failures\n- The `allowFailure` flag lets a task fail without stopping the process\n\n**Checkpoint 2:** You have defined all task definitions for the process.\n\n---\n\n## Step 4: Write the Main Process Definition\n\nNow let's create the main process definition that orchestrates all these tasks.\n\nCreate the process file:\n\n```bash\ntouch .a5c/processes/build-deploy.js\n```\n\nOpen `.a5c/processes/build-deploy.js` and add:\n\n```javascript\n// .a5c/processes/build-deploy.js\n// Custom Build and Deploy Process Definition\n\nimport {\n  lintTask,\n  unitTestTask,\n  buildTask,\n  securityScanTask,\n  deployStagingTask,\n  integrationTestTask,\n  deployProductionTask,\n  qualityAssessmentTask,\n} from './tasks.js';\n\n/**\n * Build 
and Deploy Process\n *\n * This process orchestrates a complete build and deployment pipeline with:\n * - Parallel lint and test execution\n * - Security scanning\n * - Staged deployments with human approval gates\n *\n * @param {Object} inputs - Process inputs\n * @param {string} inputs.environment - Target environment ('staging' or 'production')\n * @param {number} inputs.qualityThreshold - Minimum quality score (default: 80)\n * @param {boolean} inputs.skipStaging - Skip staging deployment (default: false)\n * @param {Object} ctx - Babysitter context object\n */\nexport async function process(inputs, ctx) {\n  const {\n    environment = 'production',\n    qualityThreshold = 80,\n    skipStaging = false,\n  } = inputs;\n\n  ctx.log(`Starting build-deploy process for ${environment}`);\n  ctx.log(`Quality threshold: ${qualityThreshold}`);\n\n  // ============================================\n  // PHASE 1: Quality Checks (Parallel Execution)\n  // ============================================\n  ctx.log('Phase 1: Running quality checks in parallel...');\n\n  // Run lint and unit tests in parallel for faster execution\n  const [lintResult, testResult] = await ctx.parallel.all([\n    () => ctx.task(lintTask, {}),\n    () => ctx.task(unitTestTask, {}),\n  ]);\n\n  ctx.log(`Lint completed: ${lintResult.exitCode === 0 ? 'PASS' : 'FAIL'}`);\n  ctx.log(`Tests completed: ${testResult.exitCode === 0 ? 'PASS' : 'FAIL'}`);\n\n  // Store results in process state for later use\n  ctx.setState('lintResult', lintResult);\n  ctx.setState('testResult', testResult);\n\n  // Fail fast if critical checks fail\n  if (lintResult.exitCode !== 0 || testResult.exitCode !== 0) {\n    ctx.log('Quality checks failed. 
Stopping process.');\n    return {\n      success: false,\n      phase: 'quality-checks',\n      error: 'Lint or tests failed',\n      lintResult,\n      testResult,\n    };\n  }\n\n  // ============================================\n  // PHASE 2: Build Application\n  // ============================================\n  ctx.log('Phase 2: Building application...');\n\n  const buildResult = await ctx.task(buildTask, {});\n\n  if (buildResult.exitCode !== 0) {\n    ctx.log('Build failed. Stopping process.');\n    return {\n      success: false,\n      phase: 'build',\n      error: 'Build failed',\n      buildResult,\n    };\n  }\n\n  ctx.log('Build completed successfully');\n\n  // ============================================\n  // PHASE 3: Security Scan\n  // ============================================\n  ctx.log('Phase 3: Running security scan...');\n\n  const securityResult = await ctx.task(securityScanTask, {});\n\n  ctx.setState('securityResult', securityResult);\n\n  // Log security findings but don't necessarily fail\n  if (securityResult.exitCode !== 0) {\n    ctx.log('Security scan found potential issues. Review before deployment.');\n  } else {\n    ctx.log('Security scan passed');\n  }\n\n  // ============================================\n  // PHASE 4: Quality Assessment\n  // ============================================\n  ctx.log('Phase 4: Performing AI quality assessment...');\n\n  const qualityResult = await ctx.task(qualityAssessmentTask, {\n    lintResult: JSON.stringify(lintResult),\n    testResult: JSON.stringify(testResult),\n    securityResult: JSON.stringify(securityResult),\n  });\n\n  const qualityScore = qualityResult.score || 0;\n  ctx.log(`Quality score: ${qualityScore}/100 (threshold: ${qualityThreshold})`);\n\n  if (qualityScore < qualityThreshold) {\n    ctx.log(`Quality score ${qualityScore} is below threshold ${qualityThreshold}`);\n\n    // Breakpoint: Allow human to override low quality score\n    await ctx.breakpoint({\n      question: `Quality score (${qualityScore}) is below threshold (${qualityThreshold}). 
Continue anyway?`,\n      title: 'Quality Threshold Warning',\n      context: {\n        qualityScore,\n        qualityThreshold,\n        issues: qualityResult.issues || [],\n        recommendations: qualityResult.recommendations || [],\n      },\n      severity: 'warning',\n    });\n\n    ctx.log('Human approved continuation despite low quality score');\n  }\n\n  // ============================================\n  // PHASE 5: Staging Deployment (Optional)\n  // ============================================\n  if (!skipStaging) {\n    ctx.log('Phase 5: Deploying to staging...');\n\n    // Breakpoint: Staging deployment approval\n    await ctx.breakpoint({\n      question: 'Approve deployment to staging environment?',\n      title: 'Staging Deployment Approval',\n      context: {\n        buildResult: 'SUCCESS',\n        qualityScore,\n        securityStatus: securityResult.exitCode === 0 ? 'PASS' : 'REVIEW_NEEDED',\n        targetEnvironment: 'staging',\n      },\n    });\n\n    const stagingResult = await ctx.task(deployStagingTask, {});\n\n    if (stagingResult.exitCode !== 0) {\n      ctx.log('Staging deployment failed');\n      return {\n        success: false,\n        phase: 'staging-deploy',\n        error: 'Staging deployment failed',\n        stagingResult,\n      };\n    }\n\n    ctx.log('Staging deployment successful');\n\n    // ============================================\n    // PHASE 6: Integration Tests\n    // ============================================\n    ctx.log('Phase 6: Running integration tests against staging...');\n\n    const integrationResult = await ctx.task(integrationTestTask, {});\n\n    ctx.setState('integrationResult', integrationResult);\n\n    if (integrationResult.exitCode !== 0) {\n      ctx.log('Integration tests failed');\n\n      await ctx.breakpoint({\n        question: 'Integration tests failed. 
Continue to production anyway?',\n        title: 'Integration Test Failure',\n        context: {\n          integrationResult,\n          recommendation: 'Investigate failures before proceeding',\n        },\n        severity: 'error',\n      });\n    }\n\n    ctx.log('Integration tests completed');\n  } else {\n    ctx.log('Phase 5-6: Skipping staging deployment (skipStaging=true)');\n  }\n\n  // ============================================\n  // PHASE 7: Production Deployment\n  // ============================================\n  if (environment === 'production') {\n    ctx.log('Phase 7: Preparing for production deployment...');\n\n    // Final breakpoint before production\n    await ctx.breakpoint({\n      question: 'Approve deployment to PRODUCTION environment? This is the final step.',\n      title: 'PRODUCTION Deployment Approval',\n      context: {\n        qualityScore,\n        stagingDeployed: !skipStaging,\n        integrationTestsPassed: ctx.getState('integrationResult')?.exitCode === 0,\n        timestamp: new Date().toISOString(),\n        warning: 'This will affect live users',\n      },\n      severity: 'critical',\n    });\n\n    ctx.log('Production deployment approved');\n\n    const prodResult = await ctx.task(deployProductionTask, {});\n\n    if (prodResult.exitCode !== 0) {\n      ctx.log('Production deployment failed');\n      return {\n        success: false,\n        phase: 'production-deploy',\n        error: 'Production deployment failed',\n        prodResult,\n      };\n    }\n\n    ctx.log('Production deployment successful!');\n  }\n\n  // ============================================\n  // COMPLETE: Return Summary\n  // ============================================\n  return {\n    success: true,\n    summary: {\n      environment,\n      qualityScore,\n      phases: {\n        lint: 'PASS',\n        tests: 'PASS',\n        build: 'PASS',\n        security: securityResult.exitCode === 0 ? 'PASS' : 'WARNINGS',\n        staging: skipStaging ? 'SKIPPED' : 'PASS',\n        integration: skipStaging ? 'SKIPPED' : 'PASS',\n        production: environment === 'production' ? 'PASS' : 'SKIPPED',\n      },\n      completedAt: new Date().toISOString(),\n    },\n  };\n}\n```\n\n**Key Concepts Demonstrated:**\n\n1. 
**Parallel Execution (`ctx.parallel.all`)**: Lint and tests run simultaneously\n2. **Sequential Dependencies**: Build waits for lint and tests to complete\n3. **State Management (`ctx.setState/getState`)**: Store intermediate results\n4. **Multiple Breakpoints**: Staged approvals at critical points\n5. **Parameterized Inputs**: Environment, quality threshold, skip options\n6. **Conditional Logic**: Different paths based on parameters\n7. **Error Handling**: Fail fast with meaningful error messages\n\n**Checkpoint 3:** You have written the complete process definition.\n\n---\n\n## Step 5: Set Up Supporting Scripts\n\nFor our process to work, we need some npm scripts. Update your `package.json`:\n\n```json\n{\n  \"name\": \"custom-process-tutorial\",\n  \"version\": \"1.0.0\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"lint\": \"echo 'Running lint...' && exit 0\",\n    \"test\": \"echo 'Running tests...' && exit 0\",\n    \"test:integration\": \"echo 'Running integration tests...' && exit 0\",\n    \"build\": \"echo 'Building application...' && mkdir -p dist && echo 'Build complete' > dist/build.txt && exit 0\"\n  },\n  \"devDependencies\": {}\n}\n```\n\n> **Note:** These are placeholder scripts. In a real project, you would have actual lint, test, and build commands.\n\n---\n\n## Step 6: Register Your Custom Process\n\nNow we need to tell Babysitter about our custom process. 
Create a process manifest:\n\n```bash\ntouch .a5c/processes/manifest.json\n```\n\nAdd the following content:\n\n```json\n{\n  \"processes\": [\n    {\n      \"name\": \"build-deploy\",\n      \"description\": \"Custom build and deployment pipeline with staged approvals\",\n      \"file\": \"./build-deploy.js\",\n      \"inputs\": {\n        \"environment\": {\n          \"type\": \"string\",\n          \"default\": \"staging\",\n          \"description\": \"Target environment (staging or production)\"\n        },\n        \"qualityThreshold\": {\n          \"type\": \"number\",\n          \"default\": 80,\n          \"description\": \"Minimum quality score required (0-100)\"\n        },\n        \"skipStaging\": {\n          \"type\": \"boolean\",\n          \"default\": false,\n          \"description\": \"Skip staging deployment\"\n        }\n      }\n    }\n  ]\n}\n```\n\n**Checkpoint 4:** Your custom process is registered and ready to use.\n\n---\n\n## Step 7: Run Your Custom Process\n\nNow let's run our custom process using Claude Code and Babysitter.\n\nStart Claude Code in your project directory:\n\n```bash\nclaude\n```\n\nThen run the process:\n\n```\n/babysitter:call run build-deploy with environment=staging and qualityThreshold=75\n```\n\nOr using natural language:\n\n```\nUse the babysitter skill to run my custom build-deploy process.\nDeploy to staging environment with a quality threshold of 75.\n```\n\n**What you should see:**\n\n```\nCreating new babysitter run: build-deploy-20260125-160000\n\nProcess: build-deploy (custom)\nInputs:\n  - environment: staging\n  - qualityThreshold: 75\n  - skipStaging: false\n\nRun ID: 01KGHTYK2MP9Q8BN5YM4XRZ3WD\nRun Directory: .a5c/runs/01KGHTYK2MP9Q8BN5YM4XRZ3WD/\n\nStarting process...\n\n[Phase 1] Running quality checks in parallel...\n- Lint: Starting...\n- Unit Tests: Starting...\n- Lint: PASS (0.8s)\n- Unit Tests: PASS (1.2s)\n\n[Phase 2] Building application...\n- Build: PASS (0.5s)\n\n[Phase 3] Running security scan...\n- Security: PASS (0.3s)\n\n[Phase 4] Performing AI quality assessment...\n- Quality Score: 92/100\n\n[Phase 5] Deploying to staging...\nWaiting for 
breakpoint approval...\n\nBreakpoint: Staging Deployment Approval\nVisit http://localhost:3184 to approve or reject.\n```\n\n---\n\n## Step 8: Approve Breakpoints\n\n### Interactive Mode (Claude Code)\n\nSince you're running in Claude Code, Claude will ask you directly:\n\n```\nClaude: The build is complete and ready for staging deployment.\n\n Summary:\n - Build Result: SUCCESS\n - Quality Score: 92\n - Security Status: PASS\n - Target Environment: staging\n\n Approve deployment to staging environment?\n\n [Approve] [Reject] [Add Comment]\n\nYou: [Click Approve or type \"yes\"]\n```\n\nSimply respond to approve and continue!\n\n### Non-Interactive Mode (Alternative)\n\nIf using non-interactive mode, open `http://localhost:3184` in your browser. You will see the staging deployment breakpoint waiting for approval.\n\nClick **\"Approve\"** to continue the process.\n\n**What you should see after approval:**\n\n```\nBreakpoint approved\n\n[Phase 5 continued] Staging deployment in progress...\n- Deploy Staging: PASS (2.1s)\n- Staging deployment successful\n\n[Phase 6] Running integration tests against staging...\n- Integration Tests: PASS (0.5s)\n\nProcess completed successfully!\n\nSummary:\n - Environment: staging\n - Quality Score: 92/100\n - All phases: PASS\n - Completed at: 2026-01-25T16:05:23.456Z\n```\n\n**Checkpoint 5:** You have successfully run your custom process with breakpoint approval.\n\n---\n\n## Step 9: Run for Production Deployment\n\nNow let's test the full process with production deployment:\n\n```\n/babysitter:call run build-deploy with environment=production qualityThreshold=85\n```\n\nThis time, you will see **multiple breakpoints**:\n\n1. **Staging Deployment Approval** - Approve staging\n2. 
**PRODUCTION Deployment Approval** - Final approval before production\n\nEach breakpoint will appear in the breakpoints UI with appropriate context and severity.\n\n> **Note:** Production deployment breakpoints are marked with `severity: critical` and include additional warnings.\n\n---\n\n## Step 10: Understanding Parallel Execution Benefits\n\nLet's examine the performance benefit of parallel execution. In our process, lint and unit tests run in parallel:\n\n**Without Parallel Execution (Sequential):**\n```\nLint: [████████] 10s\nTests: [████████████████] 20s\nTotal: 30 seconds\n```\n\n**With Parallel Execution:**\n```\nLint: [████████] 10s\nTests: [████████████████] 20s\n ↑ Both run simultaneously\nTotal: 20 seconds (33% faster!)\n```\n\nThe `ctx.parallel.all()` method runs independent tasks concurrently, significantly reducing total execution time.\n\n### When to Use Parallel Execution\n\n| Scenario | Use Parallel? | Reason |\n|----------|--------------|--------|\n| Lint + Tests | Yes | Independent, no shared state |\n| Build → Deploy | No | Deploy depends on build output |\n| Multiple security scans | Yes | Independent checks |\n| Database migration → Seed | No | Seed depends on migration |\n\n---\n\n## Step 11: Examine the Journal\n\nLet's look at how parallel execution appears in the journal:\n\n```bash\ncat .a5c/runs/01KGHTYK2MP9Q8BN5YM4XRZ3WD/journal/journal.jsonl | grep -E \"TASK_STARTED|TASK_COMPLETED\"\n```\n\n**What you should see:**\n\n```json\n{\"type\":\"TASK_STARTED\",\"timestamp\":\"2026-01-25T16:00:01.123Z\",\"taskId\":\"lint-001\"}\n{\"type\":\"TASK_STARTED\",\"timestamp\":\"2026-01-25T16:00:01.125Z\",\"taskId\":\"unit-tests-001\"}\n{\"type\":\"TASK_COMPLETED\",\"timestamp\":\"2026-01-25T16:00:01.923Z\",\"taskId\":\"lint-001\",\"exitCode\":0}\n{\"type\":\"TASK_COMPLETED\",\"timestamp\":\"2026-01-25T16:00:02.325Z\",\"taskId\":\"unit-tests-001\",\"exitCode\":0}\n```\n\nNotice how both tasks started almost simultaneously (2ms apart), demonstrating 
parallel execution.\n\n---\n\n## Step 12: Customizing for Your Project\n\nNow that you understand the structure, here's how to customize the process for your own needs:\n\n### Adding a New Task\n\n1. Define the task in `tasks.js`:\n\n```javascript\nexport const e2eTestTask = {\n  type: 'shell',\n  name: 'e2e-tests',\n  description: 'Run end-to-end tests with Playwright',\n  command: 'npx playwright test',\n  timeout: 600000, // 10 minutes\n  retries: 2,\n};\n```\n\n2. Import and use in `build-deploy.js`:\n\n```javascript\nimport { e2eTestTask } from './tasks.js';\n\n// Add after integration tests\nconst e2eResult = await ctx.task(e2eTestTask, {});\n```\n\n### Adding a Conditional Breakpoint\n\n```javascript\n// Only require approval for changes to sensitive files\nif (changesIncludeSensitiveFiles) {\n  await ctx.breakpoint({\n    question: 'Sensitive files modified. Security review required.',\n    title: 'Security Review Required',\n    severity: 'critical',\n  });\n}\n```\n\n### Adding Environment-Specific Logic\n\n```javascript\nif (environment === 'production') {\n  // Additional production-only checks\n  await ctx.task(productionReadinessCheck, {});\n} else if (environment === 'staging') {\n  // Staging-specific setup\n  await ctx.task(seedTestData, {});\n}\n```\n\n---\n\n## Summary\n\nCongratulations! You have successfully created a custom Babysitter process definition. 
Let's review what you accomplished:\n\n### What You Built\n\n- A complete **build and deploy pipeline** with 7 phases\n- **Parallel execution** of lint and unit tests\n- **Four strategic breakpoints** for human approval (two of them fire only when checks fail)\n- **Quality assessment** using an agent task\n- **Parameterized inputs** for flexibility\n\n### Key Concepts Learned\n\n| Concept | What You Learned |\n|---------|------------------|\n| **Process Definition** | JavaScript function that orchestrates tasks using context API |\n| **Parallel Execution** | Run independent tasks simultaneously with `ctx.parallel.all()` |\n| **Breakpoints** | Human approval gates with context and severity levels |\n| **State Management** | Store and retrieve values with `ctx.setState/getState` |\n| **Task Types** | Shell tasks for commands, Agent tasks for LLM analysis |\n| **Parameterization** | Make processes flexible with input parameters |\n\n### Process Definition Best Practices\n\n1. **Name tasks descriptively** - Use clear names like `lint-task` not `task1`\n2. **Set appropriate timeouts** - Prevent hanging tasks from blocking workflows\n3. **Use parallel execution wisely** - Only for truly independent tasks\n4. **Place breakpoints strategically** - At irreversible actions (deployments)\n5. **Log progress frequently** - Use `ctx.log()` for visibility\n6. **Handle failures gracefully** - Return meaningful error information\n7. 
**Parameterize configurable values** - Environments, thresholds, feature flags\n\n---\n\n## Next Steps\n\nNow that you've mastered custom process definitions, here are paths to continue:\n\n### Continue Learning\n- **[Advanced Tutorial: Multi-Phase Feature Development](./advanced-multi-phase.md)** - Team workflows with agent scoring and quality convergence\n\n### Go Deeper\n- **[Process Engine Architecture](../features/process-definitions.md)** - Understand how processes execute\n- **[Task Types Reference](../reference/cli-reference.md)** - Complete reference for all task types\n\n### Apply Your Knowledge\n- **[How to Create Team Templates](../features/process-definitions.md)** - Share processes across your team\n- **[Quality Convergence](../features/quality-convergence.md)** - Custom quality evaluators\n\n---\n\n## Troubleshooting\n\n### Issue: \"Process file not found\"\n\n**Symptom:** Babysitter can't find your process definition.\n\n**Solution:**\n1. Verify file path: `.a5c/processes/build-deploy.js`\n2. Check manifest: `.a5c/processes/manifest.json` lists the process\n3. Ensure `\"type\": \"module\"` in package.json for ES modules\n\n### Issue: \"Parallel tasks not running in parallel\"\n\n**Symptom:** Tasks appear to run sequentially despite `ctx.parallel.all()`.\n\n**Solution:**\n1. Verify you're using the array of functions pattern:\n   ```javascript\n   // Correct\n   await ctx.parallel.all([\n     () => ctx.task(task1, {}),\n     () => ctx.task(task2, {}),\n   ]);\n\n   // Incorrect - this runs sequentially!\n   await ctx.parallel.all([\n     ctx.task(task1, {}), // Missing arrow function\n     ctx.task(task2, {}),\n   ]);\n   ```\n\n### Issue: \"Task timeout\"\n\n**Symptom:** Task fails with timeout error.\n\n**Solution:**\n1. Increase the timeout in task definition\n2. Check if the underlying command is actually hanging\n3. 
Consider breaking into smaller tasks\n\n---\n\n## See Also\n\n- [CLI Reference](../reference/cli-reference.md) - Running processes via CLI\n- [Configuration Reference](../reference/configuration.md) - All configuration options\n- [Breakpoints](../features/breakpoints.md) - How breakpoints work\n\n---\n\n**Document Status:** Complete\n**Last Updated:** 2026-01-25\n**Feedback:** Found an issue? [Report it on GitHub](https://github.com/a5c-ai/babysitter/issues)\n",
"documents": []
},
"outgoingEdges": [],
"incomingEdges": []
}