ADR-002: Testing Pyramid Strategy

Status

ACCEPTED - 2024-09-08

Context

To maintain the highest quality standards and achieve near-100% test coverage, we need a comprehensive testing strategy that covers every level of our applications, from isolated unit tests to end-to-end user workflows.

Decision

We implement a full testing pyramid, layering unit, integration, and end-to-end tests to provide comprehensive coverage and consistent quality assurance.

Testing Pyramid Structure

1. Unit Tests (70% of tests)

  • Purpose: Test individual functions, methods, and components in isolation
  • Scope: Fast, focused tests for specific functionality
  • Tools: Rust built-in testing, cargo test
  • Coverage: Every public function and method

2. Integration Tests (20% of tests)

  • Purpose: Test interactions between components and modules
  • Scope: Test component integration and data flow
  • Tools: Rust integration tests, wasm-bindgen-test
  • Coverage: API endpoints, database interactions, external service integration

3. End-to-End Tests (10% of tests)

  • Purpose: Test complete user workflows and application behavior
  • Scope: Full application testing from user perspective
  • Tools: Playwright, browser automation
  • Coverage: Critical user journeys and business workflows

Testing Standards

Coverage Requirements

  • Minimum coverage: 95% line coverage
  • Branch coverage: 90% branch coverage
  • Function coverage: 100% public function coverage
  • Integration coverage: 100% API endpoint coverage

Test Quality Standards

  • Fast execution: Unit tests must run in <1ms each
  • Isolated: Tests must not depend on external state
  • Deterministic: Tests must produce consistent results
  • Readable: Tests must clearly express intent and expected behavior
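
The isolation and determinism requirements are easiest to meet when external collaborators are mocked. The sketch below uses mockall (listed under Tools and Technologies); DataStore, MockDataStore, and label_for are hypothetical names used only for illustration, not part of this codebase.

// Example: an isolated, deterministic unit test with a mocked collaborator.
use mockall::{automock, predicate::eq};

#[automock]
trait DataStore {
    fn fetch(&self, id: u32) -> Option<String>;
}

fn label_for(store: &dyn DataStore, id: u32) -> String {
    store.fetch(id).unwrap_or_else(|| "missing".to_string())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn label_for_uses_the_stored_value() {
        // No database, filesystem, or network: the outcome is fully determined
        // by the mock's configuration, so the test is isolated and repeatable.
        let mut store = MockDataStore::new();
        store
            .expect_fetch()
            .with(eq(7))
            .times(1)
            .returning(|_| Some("Alice".to_string()));

        assert_eq!(label_for(&store, 7), "Alice");
    }
}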

Implementation

Unit Testing Strategy

// Example: Data processing unit tests
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_data_validation() {
        // Valid input is accepted; invalid input is rejected.
        let validator = DataValidator::new();
        assert!(validator.validate("valid_data"));
        assert!(!validator.validate("invalid_data"));
    }

    #[test]
    fn test_data_transformation() {
        // Every element in the input is doubled.
        let transformer = DataTransformer::new();
        let input = vec![1, 2, 3];
        let expected = vec![2, 4, 6];
        assert_eq!(transformer.double(input), expected);
    }
}

Integration Testing Strategy

// Example: API integration tests
#[cfg(test)]
mod integration_tests {
    use super::*;

    #[tokio::test]
    async fn test_api_endpoint() {
        // Spin up the application with test configuration.
        let app = create_test_app().await;

        // Post a known payload and expect a successful, well-formed response.
        let response = app.post("/api/data")
            .json(&test_data())
            .send()
            .await
            .unwrap();

        assert_eq!(response.status(), 200);
        let result: ApiResponse = response.json().await.unwrap();
        assert!(result.success);
    }
}
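
The wasm-bindgen-test tool listed for this layer runs integration tests inside a real browser. A minimal sketch, assuming web-sys is available with the Window, Document, Element, HtmlElement, and Node features enabled:

// Example: WASM integration test executed in a headless browser (sketch).
use wasm_bindgen_test::*;

wasm_bindgen_test_configure!(run_in_browser);

#[wasm_bindgen_test]
fn renders_into_the_dom() {
    // Build a small DOM fragment and verify it round-trips through the
    // browser's document APIs.
    let document = web_sys::window().unwrap().document().unwrap();
    let button = document.create_element("button").unwrap();
    button.set_text_content(Some("Click me"));
    document.body().unwrap().append_child(&button).unwrap();

    let rendered = document.query_selector("button").unwrap().unwrap();
    assert_eq!(rendered.text_content().unwrap(), "Click me");
}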

End-to-End Testing Strategy

// Example: Playwright E2E tests
test.describe('Data Pipeline Workflow', () => {
  test('should complete full data processing workflow', async ({ page }) => {
    // Navigate to application
    await page.goto('/');
    
    // Upload data
    await page.setInputFiles('[data-testid="file-upload"]', 'test-data.csv');
    
    // Configure pipeline
    await page.selectOption('[data-testid="processor-type"]', 'rust-processor');
    await page.fill('[data-testid="output-format"]', 'json');
    
    // Execute pipeline
    await page.click('[data-testid="start-processing"]');
    
    // Verify results
    await expect(page.locator('[data-testid="processing-status"]'))
      .toContainText('Complete');
    await expect(page.locator('[data-testid="download-link"]'))
      .toBeVisible();
  });
});

Test Organization

tests/
├── unit/                    # Unit tests
│   ├── data_processing.rs
│   ├── validation.rs
│   └── transformation.rs
├── integration/             # Integration tests
│   ├── api_tests.rs
│   ├── database_tests.rs
│   └── service_tests.rs
└── e2e/                     # End-to-end tests
    ├── user_workflows.spec.ts
    ├── data_pipeline.spec.ts
    └── performance.spec.ts

Quality Gates

Pre-commit Hooks

  • Run unit tests
  • Check code coverage
  • Lint and format code
  • Security vulnerability scan
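
As a sketch of how these hooks can be wired up locally, the following hypothetical xtask-style runner shells out to the tools named in this ADR; exact flags (for example tarpaulin's --fail-under) may vary by version and should be adjusted to the workspace.

// Example: hypothetical xtask-style pre-commit runner (sketch). It simply
// shells out to the tools named in this ADR; flags should be adjusted to the
// workspace and tool versions in use.
use std::process::Command;

fn run(step: &str, cmd: &str, args: &[&str]) {
    println!("==> {step}");
    let status = Command::new(cmd)
        .args(args)
        .status()
        .unwrap_or_else(|err| panic!("failed to start `{cmd}`: {err}"));
    assert!(status.success(), "pre-commit step failed: {step}");
}

fn main() {
    run("unit tests", "cargo", &["test", "--lib"]);
    run("coverage check", "cargo", &["tarpaulin", "--fail-under", "95"]);
    run("format check", "cargo", &["fmt", "--", "--check"]);
    run("lint", "cargo", &["clippy", "--all-targets", "--", "-D", "warnings"]);
    run("security scan", "cargo", &["audit"]);
}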

CI/CD Pipeline

  1. Unit tests: Run on every commit
  2. Integration tests: Run on pull requests
  3. E2E tests: Run on main branch
  4. Coverage report: Generate and enforce coverage thresholds
  5. Performance tests: Run benchmarks and regression tests

Coverage Monitoring

  • Real-time coverage: Track coverage during development
  • Coverage reports: Generate detailed coverage reports
  • Coverage trends: Monitor coverage trends over time
  • Coverage alerts: Alert when coverage drops below thresholds

Tools and Technologies

Rust Testing

  • cargo test: Built-in testing framework
  • criterion: Benchmarking and performance testing
  • tarpaulin: Code coverage analysis
  • mockall: Mocking framework for unit tests
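
To illustrate the criterion entry, a benchmark target might look like the sketch below; the double function is hypothetical, and real benchmarks live under benches/ with harness = false in Cargo.toml.

// Example: a criterion benchmark (sketch). `double` stands in for whatever
// hot path is being measured.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn double(values: &[u64]) -> Vec<u64> {
    values.iter().map(|v| v * 2).collect()
}

fn bench_double(c: &mut Criterion) {
    let input: Vec<u64> = (0..1_000).collect();
    c.bench_function("double 1k values", |b| {
        b.iter(|| double(black_box(&input)))
    });
}

criterion_group!(benches, bench_double);
criterion_main!(benches);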

Web Testing

  • Playwright: End-to-end testing
  • wasm-bindgen-test: WebAssembly testing
  • js-sys: JavaScript interop testing
  • web-sys: Web API testing

CI/CD Integration

  • GitHub Actions: Automated testing pipeline
  • Coverage reporting: Integration with coverage services
  • Test result reporting: Detailed test result analysis
  • Performance monitoring: Continuous performance testing

Metrics and Monitoring

Test Metrics

  • Test execution time: Track test suite performance
  • Test coverage: Monitor coverage percentages
  • Test reliability: Identify and track flaky tests
  • Test maintenance: Monitor test maintenance overhead

Quality Metrics

  • Bug escape rate: Measure bugs found in production
  • Mean time to detection: Track time to find issues
  • Mean time to resolution: Track time to fix issues
  • Customer satisfaction: Monitor client feedback on quality

Review and Updates

This ADR will be reviewed monthly to ensure the testing strategy remains effective and aligned with project needs. Updates will be made based on:

  • Team feedback and experience
  • Industry best practices
  • Tool and technology updates
  • Client requirements and feedback

Related ADRs

  • ADR-001: Test-Driven Development (TDD) First Approach
  • ADR-003: Playwright Testing for Demos
  • ADR-004: Code Coverage Standards
  • ADR-005: Performance Testing Strategy