Testing
Comprehensive testing guidelines and procedures for SOLVEFORCE projects, ensuring quality, reliability, and maintainability across all systems and documentation.
Testing Philosophy
Core Principles
Quality Assurance:
- Test early and test often throughout development
- Automated testing for consistent and repeatable results
- Comprehensive coverage of critical business logic
- Performance and security testing integration
- Continuous feedback loops for rapid improvement
Testing Pyramid:
        /\
       /  \      E2E Tests (Few)
      /____\
     /      \    Integration Tests (Some)
    /________\   Unit Tests (Many)
Test Types:
- Unit Tests: Individual component functionality
- Integration Tests: Component interaction validation
- End-to-End Tests: Complete user workflow testing
- Performance Tests: Load and stress testing
- Security Tests: Vulnerability and penetration testing
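To keep these layers explicit in the test suite, one option is to register a pytest marker per layer so each level can be selected independently. A minimal sketch follows; the marker names are illustrative assumptions, not an established SOLVEFORCE convention:
# conftest.py
def pytest_configure(config):
    """Register one marker per pyramid layer, e.g. run `pytest -m unit` locally
    and `pytest -m "integration or e2e"` in CI."""
    config.addinivalue_line("markers", "unit: fast, isolated unit tests")
    config.addinivalue_line("markers", "integration: tests of component interactions")
    config.addinivalue_line("markers", "e2e: complete user workflow tests")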
Unit Testing Standards
Test Structure
AAA Pattern Implementation:
import pytest
from unittest.mock import Mock, patch
from src.services.connectivity import FiberService
from src.exceptions import ValidationError
class TestFiberService:
def setup_method(self):
"""Set up test fixtures before each test method."""
self.fiber_service = FiberService()
self.mock_pricing_api = Mock()
self.valid_service_request = {
'location': 'Dallas, TX',
'bandwidth': 1000,
'service_type': 'dedicated',
'term_length': 36
}
def test_calculate_fiber_pricing_with_valid_request_returns_quote(self):
"""Test fiber pricing calculation with valid service request."""
# Arrange
expected_monthly_cost = 850.00
expected_setup_cost = 2500.00
self.mock_pricing_api.calculate_fiber_cost.return_value = {
'monthly_recurring': expected_monthly_cost,
'one_time_setup': expected_setup_cost,
'currency': 'USD'
}
# Act
with patch.object(self.fiber_service, 'pricing_api', self.mock_pricing_api):
result = self.fiber_service.get_pricing_quote(self.valid_service_request)
# Assert
assert result['monthly_recurring'] == expected_monthly_cost
assert result['one_time_setup'] == expected_setup_cost
assert result['currency'] == 'USD'
self.mock_pricing_api.calculate_fiber_cost.assert_called_once_with(
location='Dallas, TX',
bandwidth=1000,
service_type='dedicated',
term_length=36
)
def test_calculate_fiber_pricing_with_invalid_bandwidth_raises_validation_error(self):
"""Test pricing calculation fails with invalid bandwidth."""
# Arrange
invalid_request = self.valid_service_request.copy()
invalid_request['bandwidth'] = -100 # Invalid negative bandwidth
# Act & Assert
with pytest.raises(ValidationError) as exc_info:
self.fiber_service.get_pricing_quote(invalid_request)
assert 'bandwidth must be positive' in str(exc_info.value)
@pytest.mark.parametrize("bandwidth,expected_tier", [
(100, 'basic'),
(1000, 'standard'),
(10000, 'premium'),
(100000, 'enterprise')
])
def test_determine_service_tier_returns_correct_tier_for_bandwidth(self, bandwidth, expected_tier):
"""Test service tier determination based on bandwidth requirements."""
# Act
result = self.fiber_service.determine_service_tier(bandwidth)
# Assert
assert result == expected_tier
Test Naming Conventions:
# Good test names - descriptive and specific
def test_create_service_with_valid_data_returns_service_id():
def test_calculate_monthly_cost_with_discount_applies_percentage_correctly():
def test_validate_location_with_unsupported_area_raises_location_error():
# Bad test names - vague and unclear
def test_service():
def test_calculation():
def test_validation():
Test Coverage Requirements
Coverage Targets:
- Critical Business Logic: 95-100% coverage
- API Endpoints: 90-95% coverage
- Utility Functions: 85-90% coverage
- Configuration Code: 70-80% coverage
- Overall Project: Minimum 80% coverage
Coverage Configuration:
# .coveragerc
[run]
source = src/
omit =
*/tests/*
*/venv/*
*/migrations/*
*/settings/*
*/__init__.py
[report]
exclude_lines =
pragma: no cover
def __repr__
if self.debug:
if settings.DEBUG
raise AssertionError
raise NotImplementedError
if 0:
if __name__ == .__main__.:
[html]
directory = htmlcov
Running Coverage Analysis:
# Generate coverage report
pytest --cov=src --cov-report=html --cov-report=term-missing --cov-report=xml
# View HTML coverage report
open htmlcov/index.html
# Check coverage thresholds
pytest --cov=src --cov-fail-under=80
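The --cov-fail-under flag enforces only a single project-wide threshold. To check the differentiated targets listed above, a small script can read the Cobertura coverage.xml emitted by pytest-cov and compare each package against its own target. This is a sketch; the script path, package names, and thresholds are illustrative assumptions:
# scripts/check_coverage_targets.py
import sys
import xml.etree.ElementTree as ET

# Hypothetical per-area targets mirroring the list above (fractions of 1.0).
TARGETS = {
    "src.services": 0.95,  # critical business logic
    "src.api": 0.90,       # API endpoints
    "src.utils": 0.85,     # utility functions
}

def main(xml_path: str = "coverage.xml") -> None:
    root = ET.parse(xml_path).getroot()
    failures = []
    for package in root.iter("package"):
        name = package.get("name", "")
        line_rate = float(package.get("line-rate", 0))
        target = TARGETS.get(name)
        if target is not None and line_rate < target:
            failures.append(f"{name}: {line_rate:.1%} is below the {target:.0%} target")
    if failures:
        sys.exit("Coverage targets not met:\n" + "\n".join(failures))
    print("All per-area coverage targets met.")

if __name__ == "__main__":
    main()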
Integration Testing
API Integration Tests
REST API Testing:
import pytest
import requests
from fastapi.testclient import TestClient
from src.main import app
class TestServiceAPI:
def setup_method(self):
"""Set up test client and authentication."""
self.client = TestClient(app)
self.auth_headers = {
'Authorization': 'Bearer test_token_12345',
'Content-Type': 'application/json'
}
def test_create_service_endpoint_with_valid_data_returns_201(self):
"""Test service creation API endpoint with valid request."""
# Arrange
service_data = {
'name': 'Test Fiber Service',
'type': 'fiber',
'bandwidth': 1000,
'location': 'Dallas, TX',
'term_length': 36
}
# Act
response = self.client.post(
'/api/v1/services',
json=service_data,
headers=self.auth_headers
)
# Assert
assert response.status_code == 201
response_data = response.json()
assert 'service_id' in response_data
assert response_data['status'] == 'provisioning'
assert response_data['name'] == service_data['name']
def test_get_service_pricing_endpoint_returns_accurate_quote(self):
"""Test pricing endpoint returns accurate service quotes."""
# Arrange
pricing_request = {
'service_type': 'fiber',
'bandwidth': 1000,
'location': 'Dallas, TX',
'term_length': 36
}
# Act
response = self.client.post(
'/api/v1/pricing/quote',
json=pricing_request,
headers=self.auth_headers
)
# Assert
assert response.status_code == 200
quote_data = response.json()
assert 'monthly_recurring' in quote_data
assert 'one_time_setup' in quote_data
assert quote_data['monthly_recurring'] > 0
assert quote_data['currency'] == 'USD'
def test_unauthorized_request_returns_401(self):
"""Test API endpoints require valid authentication."""
# Act
response = self.client.get('/api/v1/services')
# Assert
assert response.status_code == 401
assert 'authentication required' in response.json()['error'].lower()
Database Integration Tests
Database Testing with Fixtures:
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.database.models import Base, Service, Customer
from src.database.connection import get_db_session
@pytest.fixture(scope='session')
def test_database():
"""Create test database for integration tests."""
engine = create_engine('sqlite:///test_solveforce.db')
Base.metadata.create_all(engine)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
yield TestingSessionLocal
# Cleanup
Base.metadata.drop_all(engine)
@pytest.fixture
def db_session(test_database):
"""Provide database session for each test."""
session = test_database()
try:
yield session
finally:
session.rollback()
session.close()
class TestServiceDatabase:
def test_create_service_record_persists_to_database(self, db_session):
"""Test service creation persists correctly to database."""
# Arrange
customer = Customer(
name='Test Company',
email='test@company.com',
phone='555-0123'
)
db_session.add(customer)
db_session.commit()
service = Service(
customer_id=customer.id,
name='Test Fiber Service',
service_type='fiber',
bandwidth=1000,
location='Dallas, TX',
monthly_cost=850.00,
status='active'
)
# Act
db_session.add(service)
db_session.commit()
# Assert
retrieved_service = db_session.query(Service).filter_by(name='Test Fiber Service').first()
assert retrieved_service is not None
assert retrieved_service.bandwidth == 1000
assert retrieved_service.monthly_cost == 850.00
assert retrieved_service.customer.name == 'Test Company'
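The get_db_session import above suggests the application resolves its database session through dependency injection. If the FastAPI app used in the API tests relies on that dependency, the test session can be swapped in so API-level integration tests also run against the test database. A sketch under that assumption:
# conftest.py (integration tests)
import pytest
from fastapi.testclient import TestClient
from src.main import app
from src.database.connection import get_db_session

@pytest.fixture
def api_client(db_session):
    """Route the app's session dependency to the test session for one test."""
    app.dependency_overrides[get_db_session] = lambda: db_session
    try:
        yield TestClient(app)
    finally:
        app.dependency_overrides.pop(get_db_session, None)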
End-to-End Testing
User Journey Testing
Selenium WebDriver Tests:
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
class TestServicePortalE2E:
@pytest.fixture(autouse=True)
def setup_browser(self):
"""Set up browser for each test."""
chrome_options = Options()
chrome_options.add_argument('--headless') # Run in CI/CD
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
self.driver = webdriver.Chrome(options=chrome_options)
self.driver.implicitly_wait(10)
self.wait = WebDriverWait(self.driver, 10)
yield
self.driver.quit()
def test_complete_service_request_workflow(self):
"""Test complete user journey from login to service request."""
# Step 1: Navigate to portal
self.driver.get('https://portal.solveforce.com')
# Step 2: Login
username_field = self.wait.until(
EC.presence_of_element_located((By.ID, 'username'))
)
password_field = self.driver.find_element(By.ID, 'password')
login_button = self.driver.find_element(By.ID, 'login-btn')
username_field.send_keys('test@company.com')
password_field.send_keys('test_password')
login_button.click()
# Step 3: Navigate to service request
self.wait.until(
EC.presence_of_element_located((By.ID, 'dashboard'))
)
new_service_btn = self.driver.find_element(By.ID, 'new-service-btn')
new_service_btn.click()
# Step 4: Fill service request form
service_type = self.wait.until(
EC.presence_of_element_located((By.ID, 'service-type'))
)
service_type.send_keys('Fiber Internet')
bandwidth = self.driver.find_element(By.ID, 'bandwidth')
bandwidth.send_keys('1000')
location = self.driver.find_element(By.ID, 'location')
location.send_keys('123 Main St, Dallas, TX 75201')
submit_btn = self.driver.find_element(By.ID, 'submit-request')
submit_btn.click()
# Step 5: Verify request submission
success_message = self.wait.until(
EC.presence_of_element_located((By.CLASS_NAME, 'success-message'))
)
assert 'Service request submitted successfully' in success_message.text
# Step 6: Verify request appears in dashboard
dashboard_link = self.driver.find_element(By.ID, 'dashboard-link')
dashboard_link.click()
pending_requests = self.wait.until(
EC.presence_of_element_located((By.CLASS_NAME, 'pending-requests'))
)
assert 'Fiber Internet' in pending_requests.text
Mobile Responsiveness Testing
Cross-Device Testing:
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
class TestMobileResponsiveness:
@pytest.mark.parametrize("device_size", [
(375, 667), # iPhone SE
(414, 896), # iPhone XR
(768, 1024), # iPad
(1920, 1080) # Desktop
])
def test_service_portal_responsive_design(self, device_size):
"""Test portal displays correctly across different screen sizes."""
width, height = device_size
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
driver = webdriver.Chrome(options=chrome_options)
try:
driver.set_window_size(width, height)
driver.get('https://portal.solveforce.com')
# Check navigation is accessible
nav_menu = driver.find_element(By.CLASS_NAME, 'navigation')
assert nav_menu.is_displayed()
# Check main content is visible
main_content = driver.find_element(By.ID, 'main-content')
assert main_content.is_displayed()
# Check footer is present
footer = driver.find_element(By.TAG_NAME, 'footer')
assert footer.is_displayed()
finally:
driver.quit()
Performance Testing
Load Testing
API Load Testing with Locust:
# locustfile.py
from locust import HttpUser, task, between
import random
class ServicePortalUser(HttpUser):
wait_time = between(1, 5)
def on_start(self):
"""Login user at start of test."""
response = self.client.post('/api/v1/auth/login', json={
'username': 'load_test_user@company.com',
'password': 'test_password'
})
if response.status_code == 200:
self.token = response.json()['access_token']
self.headers = {'Authorization': f'Bearer {self.token}'}
@task(3)
def view_services(self):
"""Simulate viewing services list."""
self.client.get('/api/v1/services', headers=self.headers)
@task(2)
def get_pricing_quote(self):
"""Simulate getting pricing quotes."""
pricing_data = {
'service_type': random.choice(['fiber', 'ethernet', 'wireless']),
'bandwidth': random.choice([100, 500, 1000, 10000]),
'location': random.choice(['Dallas, TX', 'Austin, TX', 'Houston, TX']),
'term_length': random.choice([12, 24, 36])
}
self.client.post('/api/v1/pricing/quote',
json=pricing_data,
headers=self.headers)
@task(1)
def create_service_request(self):
"""Simulate creating service requests."""
service_data = {
'name': f'Load Test Service {random.randint(1000, 9999)}',
'type': 'fiber',
'bandwidth': 1000,
'location': 'Dallas, TX',
'term_length': 36
}
self.client.post('/api/v1/services',
json=service_data,
headers=self.headers)
# Run load test
# locust -f locustfile.py --host=https://api.solveforce.com
Performance Benchmarking:
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
import requests
class PerformanceBenchmark:
def __init__(self, base_url, auth_token):
self.base_url = base_url
self.headers = {'Authorization': f'Bearer {auth_token}'}
def benchmark_endpoint(self, endpoint, num_requests=100, concurrent_users=10):
"""Benchmark API endpoint performance."""
response_times = []
def make_request():
start_time = time.time()
response = requests.get(f'{self.base_url}{endpoint}', headers=self.headers)
end_time = time.time()
return {
'response_time': end_time - start_time,
'status_code': response.status_code,
'success': response.status_code == 200
}
# Execute concurrent requests
with ThreadPoolExecutor(max_workers=concurrent_users) as executor:
futures = [executor.submit(make_request) for _ in range(num_requests)]
results = [future.result() for future in futures]
# Calculate statistics
response_times = [r['response_time'] for r in results]
success_rate = sum(1 for r in results if r['success']) / len(results)
return {
'endpoint': endpoint,
'num_requests': num_requests,
'concurrent_users': concurrent_users,
'success_rate': success_rate,
'avg_response_time': statistics.mean(response_times),
'median_response_time': statistics.median(response_times),
'p95_response_time': statistics.quantiles(response_times, n=20)[18],
'p99_response_time': statistics.quantiles(response_times, n=100)[98],
'min_response_time': min(response_times),
'max_response_time': max(response_times)
}
# Usage
benchmark = PerformanceBenchmark('https://api.solveforce.com', 'test_token')
results = benchmark.benchmark_endpoint('/api/v1/services')
print(f"Average response time: {results['avg_response_time']:.3f}s")
print(f"95th percentile: {results['p95_response_time']:.3f}s")
print(f"Success rate: {results['success_rate']:.1%}")
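The same benchmark can be wired into pytest so a build fails when latency or reliability slips past agreed limits. The following is a sketch; the 99% success-rate floor, 500 ms p95 target, and token are illustrative assumptions:
def test_services_endpoint_meets_performance_targets():
    """Fail the run if the services endpoint misses its performance targets."""
    benchmark = PerformanceBenchmark('https://api.solveforce.com', 'test_token')
    results = benchmark.benchmark_endpoint('/api/v1/services', num_requests=50, concurrent_users=5)
    assert results['success_rate'] >= 0.99
    assert results['p95_response_time'] < 0.5  # seconds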
Security Testing
Vulnerability Testing
Security Test Cases:
import requests
import pytest
from urllib.parse import quote
class TestAPISecurity:
def setup_method(self):
self.base_url = 'https://api.solveforce.com/v1'
self.valid_token = 'valid_test_token'
self.invalid_token = 'invalid_token_12345'
def test_unauthorized_access_returns_401(self):
"""Test API endpoints reject unauthorized requests."""
endpoints = [
'/services',
'/billing/invoices',
'/support/tickets',
'/analytics/usage'
]
for endpoint in endpoints:
response = requests.get(f'{self.base_url}{endpoint}')
assert response.status_code == 401, f"Endpoint {endpoint} should require authentication"
def test_invalid_token_returns_401(self):
"""Test API rejects invalid authentication tokens."""
headers = {'Authorization': f'Bearer {self.invalid_token}'}
response = requests.get(f'{self.base_url}/services', headers=headers)
assert response.status_code == 401
def test_sql_injection_protection(self):
"""Test API protects against SQL injection attacks."""
sql_payloads = [
"'; DROP TABLE services; --",
"' OR '1'='1",
"'; SELECT * FROM users; --",
"' UNION SELECT * FROM customers --"
]
headers = {'Authorization': f'Bearer {self.valid_token}'}
for payload in sql_payloads:
# Test in query parameters
response = requests.get(
f'{self.base_url}/services',
params={'search': payload},
headers=headers
)
assert response.status_code != 500, f"SQL injection payload caused server error: {payload}"
# Test in POST data
response = requests.post(
f'{self.base_url}/services',
json={'name': payload},
headers=headers
)
assert response.status_code in [400, 422], f"SQL injection not properly validated: {payload}"
def test_xss_protection(self):
"""Test API protects against XSS attacks."""
xss_payloads = [
"<script>alert('xss')</script>",
"javascript:alert('xss')",
"<img src=x onerror=alert('xss')>",
"';alert('xss');//"
]
headers = {'Authorization': f'Bearer {self.valid_token}'}
for payload in xss_payloads:
response = requests.post(
f'{self.base_url}/support/tickets',
json={
'subject': payload,
'description': f'Test ticket with payload: {payload}'
},
headers=headers
)
# Check response doesn't contain unescaped payload
if response.status_code == 201:
response_text = response.text.lower()
assert '<script>' not in response_text
assert 'javascript:' not in response_text
def test_rate_limiting_enforced(self):
"""Test API enforces rate limiting."""
headers = {'Authorization': f'Bearer {self.valid_token}'}
# Make rapid requests to trigger rate limit
responses = []
for i in range(20):
response = requests.get(f'{self.base_url}/services', headers=headers)
responses.append(response.status_code)
# Should eventually hit rate limit
assert 429 in responses, "Rate limiting not enforced"
Authentication Testing
OAuth 2.0 Flow Testing:
class TestOAuthAuthentication:
def test_oauth_authorization_code_flow(self):
"""Test complete OAuth 2.0 authorization code flow."""
# Step 1: Get authorization code
auth_response = requests.get(
'https://auth.solveforce.com/oauth/authorize',
params={
'client_id': 'test_client_id',
'response_type': 'code',
'scope': 'services:read billing:read',
'redirect_uri': 'https://test.app/callback'
}
)
assert auth_response.status_code == 200
# Step 2: Exchange code for tokens (mocked for testing)
token_response = requests.post(
'https://auth.solveforce.com/oauth/token',
data={
'grant_type': 'authorization_code',
'code': 'test_authorization_code',
'client_id': 'test_client_id',
'client_secret': 'test_client_secret',
'redirect_uri': 'https://test.app/callback'
}
)
assert token_response.status_code == 200
token_data = token_response.json()
assert 'access_token' in token_data
assert 'refresh_token' in token_data
assert 'expires_in' in token_data
def test_token_refresh_flow(self):
"""Test OAuth token refresh mechanism."""
refresh_response = requests.post(
'https://auth.solveforce.com/oauth/token',
data={
'grant_type': 'refresh_token',
'refresh_token': 'test_refresh_token',
'client_id': 'test_client_id',
'client_secret': 'test_client_secret'
}
)
assert refresh_response.status_code == 200
token_data = refresh_response.json()
assert 'access_token' in token_data
assert 'expires_in' in token_data
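The token exchange above is noted as mocked, but the example posts to the live auth server. One way to actually stub it out is with the third-party responses library; this is a sketch, and both the use of responses and the stubbed token values are assumptions rather than existing project tooling:
import requests
import responses

@responses.activate
def test_token_exchange_with_stubbed_endpoint():
    # Stub the token endpoint so the flow can be exercised without a live auth server.
    responses.add(
        responses.POST,
        'https://auth.solveforce.com/oauth/token',
        json={'access_token': 'stub_access', 'refresh_token': 'stub_refresh', 'expires_in': 3600},
        status=200,
    )
    token_response = requests.post(
        'https://auth.solveforce.com/oauth/token',
        data={'grant_type': 'authorization_code', 'code': 'test_authorization_code'},
    )
    assert token_response.status_code == 200
    assert token_response.json()['access_token'] == 'stub_access'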
Documentation Testing
Content Validation
Link Checking:
import requests
import re
from pathlib import Path
class DocumentationTester:
def __init__(self, docs_path):
self.docs_path = Path(docs_path)
self.broken_links = []
def test_all_links(self):
"""Test all links in documentation files."""
markdown_files = self.docs_path.glob('**/*.md')
for file_path in markdown_files:
self.test_links_in_file(file_path)
if self.broken_links:
raise AssertionError(f"Found {len(self.broken_links)} broken links: {self.broken_links}")
def test_links_in_file(self, file_path):
"""Test all links in a specific markdown file."""
content = file_path.read_text(encoding='utf-8')
# Find markdown links [text](url)
link_pattern = r'\[([^\]]+)\]\(([^)]+)\)'
links = re.findall(link_pattern, content)
for link_text, url in links:
if url.startswith('http'):
# External link
self.test_external_link(url, file_path)
elif url.endswith('.md'):
# Internal documentation link
self.test_internal_link(url, file_path)
def test_external_link(self, url, source_file):
"""Test external HTTP links."""
try:
response = requests.head(url, timeout=10, allow_redirects=True)
if response.status_code >= 400:
self.broken_links.append(f"{url} in {source_file}")
except requests.RequestException:
self.broken_links.append(f"{url} in {source_file} (connection error)")
def test_internal_link(self, url, source_file):
"""Test internal markdown links."""
if url.startswith('./'):
# Relative link
target_path = source_file.parent / url[2:]
else:
# Absolute link from docs root
target_path = self.docs_path / url
if not target_path.exists():
self.broken_links.append(f"{url} in {source_file} (file not found)")
# Usage in pytest
def test_documentation_links():
"""Test all documentation links are valid."""
tester = DocumentationTester('src/')
tester.test_all_links()
Content Formatting Tests:
import re
from pathlib import Path
class ContentFormatTester:
def test_markdown_formatting(self):
"""Test markdown files follow formatting standards."""
docs_path = Path('src/')
markdown_files = docs_path.glob('**/*.md')
for file_path in markdown_files:
content = file_path.read_text(encoding='utf-8')
# Test file has proper title structure
lines = content.split('\n')
assert lines[0].startswith('# '), f"{file_path} should start with H1 title"
# Test no trailing whitespace
for i, line in enumerate(lines):
assert not line.endswith(' '), f"{file_path}:{i+1} has trailing whitespace"
# Test proper link formatting
bad_links = re.findall(r'\[.*\]\((?!http|\./).*\)', content)
assert not bad_links, f"{file_path} has improperly formatted links: {bad_links}"
def test_code_block_syntax(self):
"""Test code blocks have proper syntax highlighting."""
docs_path = Path('src/')
markdown_files = docs_path.glob('**/*.md')
for file_path in markdown_files:
content = file_path.read_text(encoding='utf-8')
# Find code blocks
code_blocks = re.findall(r'```(\w+)?\n(.*?)```', content, re.DOTALL)
for language, code in code_blocks:
if not language and len(code.strip()) > 20:
# Long code blocks should have language specified
print(f"Warning: {file_path} has code block without language specification")
Continuous Integration Testing
GitHub Actions Workflow
Comprehensive Test Pipeline:
# .github/workflows/test.yml
name: Comprehensive Test Suite
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
unit-tests:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.8', '3.9', '3.10', '3.11']
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
- name: Run unit tests
run: |
pytest tests/unit/ --cov=src --cov-report=xml --cov-report=html
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ./coverage.xml
integration-tests:
runs-on: ubuntu-latest
needs: unit-tests
services:
postgres:
image: postgres:13
env:
POSTGRES_PASSWORD: test_password
POSTGRES_DB: test_solveforce
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: '3.10'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
- name: Run integration tests
env:
DATABASE_URL: postgresql://postgres:test_password@localhost:5432/test_solveforce
run: |
pytest tests/integration/ -v
documentation-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup mdBook
uses: peaceiris/actions-mdbook@v1
with:
mdbook-version: '0.4.40'
- name: Test documentation build
run: |
mdbook build
- name: Test markdown formatting
run: |
npm install -g markdownlint-cli
markdownlint src/**/*.md
- name: Test documentation links
run: |
pip install requests
python scripts/test_documentation_links.py
security-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run security linting
run: |
pip install bandit safety
bandit -r src/
safety check --json
- name: Run dependency security scan
uses: pypa/gh-action-pip-audit@v1.0.8
with:
inputs: requirements.txt
performance-tests:
runs-on: ubuntu-latest
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v3
- name: Run load tests
run: |
pip install locust
locust -f tests/performance/locustfile.py --host https://api.solveforce.com --headless -u 10 -r 2 -t 60s
Test Reporting
Test Results Dashboard:
# scripts/generate_test_report.py
import json
import xml.etree.ElementTree as ET
from datetime import datetime
from pathlib import Path
class TestReportGenerator:
def __init__(self):
self.results = {
'timestamp': datetime.now().isoformat(),
'summary': {},
'unit_tests': {},
'integration_tests': {},
'coverage': {},
'performance': {}
}
def parse_pytest_xml(self, xml_path):
"""Parse pytest XML results."""
tree = ET.parse(xml_path)
root = tree.getroot()
total_tests = int(root.get('tests', 0))
failures = int(root.get('failures', 0))
errors = int(root.get('errors', 0))
skipped = int(root.get('skipped', 0))
self.results['unit_tests'] = {
'total': total_tests,
'passed': total_tests - failures - errors - skipped,
'failed': failures,
'errors': errors,
'skipped': skipped,
'success_rate': ((total_tests - failures - errors - skipped) / total_tests * 100) if total_tests > 0 else 0
}
def parse_coverage_xml(self, xml_path):
"""Parse coverage XML results."""
tree = ET.parse(xml_path)
root = tree.getroot()
line_rate = float(root.get('line-rate', 0)) * 100
branch_rate = float(root.get('branch-rate', 0)) * 100
self.results['coverage'] = {
'line_coverage': round(line_rate, 2),
'branch_coverage': round(branch_rate, 2),
'overall_coverage': round((line_rate + branch_rate) / 2, 2)
}
def generate_html_report(self, output_path):
"""Generate HTML test report."""
html_template = """
<!DOCTYPE html>
<html>
<head>
<title>SOLVEFORCE Test Report</title>
<style>
/* Braces are doubled so str.format() does not treat CSS blocks as replacement fields. */
body {{ font-family: Arial, sans-serif; margin: 40px; }}
.summary {{ background: #f5f5f5; padding: 20px; border-radius: 5px; }}
.metric {{ display: inline-block; margin: 10px; padding: 10px; background: white; border-radius: 3px; }}
.pass {{ color: green; }}
.fail {{ color: red; }}
.coverage {{ color: blue; }}
</style>
</head>
<body>
<h1>SOLVEFORCE Test Report</h1>
<p>Generated: {timestamp}</p>
<div class="summary">
<h2>Test Summary</h2>
<div class="metric">
<strong>Unit Tests:</strong>
<span class="pass">{unit_passed} passed</span>,
<span class="fail">{unit_failed} failed</span>
</div>
<div class="metric">
<strong>Coverage:</strong>
<span class="coverage">{coverage}%</span>
</div>
</div>
</body>
</html>
""".format(
timestamp=self.results['timestamp'],
unit_passed=self.results['unit_tests']['passed'],
unit_failed=self.results['unit_tests']['failed'],
coverage=self.results['coverage']['overall_coverage']
)
Path(output_path).write_text(html_template)
# Usage
if __name__ == '__main__':
generator = TestReportGenerator()
generator.parse_pytest_xml('test-results.xml')
generator.parse_coverage_xml('coverage.xml')
generator.generate_html_report('test-report.html')
Testing Support
Testing Resources
Documentation and Help:
- Testing Guide: This comprehensive document
- Best Practices: Internal testing standards wiki
- Code Examples: Sample test implementations
- CI/CD Pipeline: Automated testing workflows
Tools and Frameworks:
- pytest: Primary testing framework
- Selenium: Web browser automation
- Locust: Performance and load testing
- Coverage.py: Code coverage analysis
- Bandit: Security vulnerability scanning
Test Planning
Test Strategy Development:
- Risk Assessment: Identify critical system components
- Test Scope: Define what will and won't be tested
- Test Approach: Choose appropriate testing methods
- Resource Planning: Allocate time and personnel
- Success Criteria: Define quality gates and metrics
Testing Metrics:
- Test coverage percentage
- Test execution time
- Defect detection rate
- Test maintenance effort
- Performance benchmarks
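As a quick illustration of how a couple of these metrics are computed, using hypothetical numbers:
# Defect detection rate: share of defects caught before release.
defects_found_in_testing = 47
defects_found_in_production = 3
detection_rate = defects_found_in_testing / (defects_found_in_testing + defects_found_in_production)
print(f"Defect detection rate: {detection_rate:.1%}")  # 94.0%

# Coverage percentage: covered lines over total executable lines.
covered_lines, total_lines = 8_420, 9_870
print(f"Line coverage: {covered_lines / total_lines:.1%}")  # 85.3%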
Comprehensive testing ensures SOLVEFORCE delivers reliable, secure, and high-performing solutions for our customers.
Test Early, Test Often, Test Well - SOLVEFORCE Quality Through Testing Excellence.