A comprehensive, production-ready test automation framework built with Playwright, Python 3.11+, and Pytest. This framework supports both UI and REST API testing with proper error handling, reporting, and CI/CD integration.
- Features
- Project Structure
- Prerequisites
- Installation
- Running Tests
- Locator Management
- Page Object Model
- Fixtures
- API Testing
- Attach to Running Browser
- Screenshots
- Reporting
- Email Reports
- CI/CD Integration
- Best Practices
- Troubleshooting
- Contributing
- Playwright + Python: Modern browser automation with Python 3.11+
- Pytest Integration: Powerful test framework with fixtures and markers
- Page Object Model: Clean separation of test logic and page interactions
- JSON-Based Locators: Easy-to-maintain locator storage
- REST API Testing: Built-in API client with retry mechanism
- Multiple Browsers: Support for Chromium, Firefox, and WebKit
- Session Management: Login once, reuse across tests
- Attach to Browser: Connect to existing browser for debugging
- Screenshot Capture: Automatic screenshots on failure
- Allure Reporting: Beautiful, interactive test reports
- Email Notifications: Send reports via email
- CI/CD Ready: GitHub Actions, Jenkins, Azure DevOps, AWS CodePipeline
Python-Playwright-Pytest/
├── config/ # Configuration management
│ ├── __init__.py
│ └── settings.py # Pydantic settings
├── locators/ # JSON-based locators
│ ├── __init__.py
│ ├── locator_manager.py # Locator management
│ └── pages/ # Page-specific locators
│ ├── login_page.json
│ └── home_page.json
├── pages/ # Page Object Model
│ ├── __init__.py
│ ├── base_page.py # Base page with common methods
│ ├── login_page.py
│ └── home_page.py
├── tests/ # Test files
│ ├── __init__.py
│ ├── ui/ # UI tests
│ │ ├── __init__.py
│ │ ├── test_login.py
│ │ └── test_home.py
│ └── api/ # API tests
│ ├── __init__.py
│ └── test_api_example.py
├── utils/ # Utility classes
│ ├── __init__.py
│ ├── api_client.py # REST API client
│ ├── screenshot_manager.py # Screenshot handling
│ ├── email_reporter.py # Email notifications
│ └── logger.py # Logging configuration
├── reports/ # Test reports (generated)
├── screenshots/ # Screenshots (generated)
├── traces/ # Playwright traces (generated)
├── logs/ # Log files (generated)
├── .github/workflows/ # GitHub Actions
│ └── test.yml
├── aws-codepipeline/ # AWS CodePipeline
│ └── buildspec.yml
├── conftest.py # Pytest fixtures
├── pytest.ini # Pytest configuration
├── requirements.txt # Python dependencies
├── .env.example # Environment template
├── .gitignore
├── Jenkinsfile # Jenkins pipeline
├── azure-pipelines.yml # Azure DevOps pipeline
└── README.md
- Python 3.11.5 (or higher)
- pip (Python package manager)
- Git (for version control)
- Node.js (required by Playwright)
python3 --version
# Should output: Python 3.11.5 or higher
git clone https://github.com/amitbad/Python-Playwright-Pytest.git
cd Python-Playwright-Pytest
Creating a virtual environment isolates project dependencies from your system Python.
# Create virtual environment
python3 -m venv venv
# Activate virtual environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
# venv\Scripts\activate
# Verify activation (should show venv path)
which python
To deactivate the virtual environment:
deactivate
# Upgrade pip first
pip install --upgrade pip
# Install all dependencies
pip install -r requirements.txt
Playwright requires browser binaries to be installed:
# Install all browsers
python -m playwright install
# Or install specific browser
playwright install chromium
playwright install firefox
playwright install webkit
# Install with system dependencies (recommended for CI)
playwright install --with-deps chromium
If you are using a virtual environment, running python -m playwright ... ensures the command runs against the same Python environment as your tests.
# Copy the example environment file
cp .env.example .env
# Edit .env with your configuration
nano .env   # or use any text editor
Important .env variables:
# Application URLs
BASE_URL=https://your-app.com
API_BASE_URL=https://api.your-app.com
# Test Credentials
TEST_USERNAME=[email protected]
TEST_PASSWORD=your_secure_password
# Browser Settings
BROWSER=chromium
HEADLESS=true
# Run all tests
pytest
# Run with verbose output
pytest -v
# Run with detailed output
pytest -v --tb=long
# Run smoke tests only
pytest -m smoke
# Run regression tests
pytest -m regression
# Run API tests only
pytest -m api
# Run UI tests only
pytest -m ui
# Run tests requiring login
pytest -m login_required
# Combine markers
pytest -m "smoke and ui"
pytest -m "not slow"
# Run specific test file
pytest tests/ui/test_login.py
# Run specific test class
pytest tests/ui/test_login.py::TestLogin
# Run specific test method
pytest tests/ui/test_login.py::TestLogin::test_successful_login
# Run tests matching pattern
pytest -k "login"
pytest -k "test_get"
# Run with Firefox
pytest --browser firefox
# Run with WebKit (Safari)
pytest --browser webkit
# Run with Chromium (default)
pytest --browser chromium
# Run with visible browser
pytest --headed
# Run with slow motion (for debugging)
SLOW_MO=500 pytest --headed
# Install pytest-xdist
pip install pytest-xdist
# Run tests in parallel
pytest -n auto # Auto-detect CPU count
pytest -n 4     # Use 4 workers
Locators are stored in JSON files for easy maintenance. This approach:
- Separates locators from test code
- Allows non-developers to update locators
- Makes locators version-control friendly
locators/locator_manager.py provides a singleton LocatorManager that automatically loads all locator JSON files from:
locators/pages/*.json
Each JSON filename becomes a page name (example: login_page.json -> login_page). The JSON keys become locator names (example: username_input, login_button). On the first LocatorManager() call, all JSON files are read once and cached in memory.
In practice, your Page Objects reference locators by page name + locator key, so selectors stay out of Python test code.
// locators/pages/login_page.json
{
  "username_input": {
    "selector": "#username",
    "type": "css",
    "description": "Username input field"
  },
  "password_input": {
    "selector": "#password",
    "type": "css",
    "description": "Password input field"
  },
  "login_button": {
    "selector": "button[type='submit']",
    "type": "css",
    "description": "Login submit button"
  }
}
| Type | Description | Example |
|---|---|---|
| css | CSS selector | #id, .class, [attr='value'] |
| xpath | XPath selector | //button[@type='submit'] |
| text | Text content | Login, Submit |
| role | ARIA role | button:Login |
| testid | Test ID attribute | login-button |
| label | Label text | Username |
| placeholder | Placeholder text | Enter username |
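These types map naturally onto Playwright's selector engines (css is the default, xpath= and text= are explicit engines, and testid assumes a data-testid attribute). A rough sketch of that translation — an illustration, not the framework's actual code; role and label locators are usually routed through page.get_by_role() / page.get_by_label() instead:

```python
def to_playwright_selector(locator: dict) -> str:
    """Translate a JSON locator entry into a Playwright selector string."""
    sel, kind = locator["selector"], locator.get("type", "css")
    if kind == "css":
        return sel                           # CSS is Playwright's default engine
    if kind == "xpath":
        return f"xpath={sel}"                # explicit XPath engine
    if kind == "text":
        return f"text={sel}"                 # text engine
    if kind == "testid":
        return f"[data-testid='{sel}']"      # assumes a data-testid attribute
    if kind == "placeholder":
        return f"[placeholder='{sel}']"
    raise ValueError(f"route '{kind}' locators through the page.get_by_* APIs")
```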
from locators.locator_manager import LocatorManager
lm = LocatorManager()
# Example reads locators/pages/login_page.json
username_info = lm.get_locator("login_page", "username_input")
username_selector = lm.get_selector("login_page", "username_input")
- get_locator(page, element) returns the full locator dictionary for an element (example keys: selector, type, description). Use this when you need both the selector and how to interpret it.
- get_selector(page, element) returns only the selector string. Use this when you only need the raw selector.
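A BasePage built on these two methods can stay free of raw selectors. The sketch below passes the LocatorManager in explicitly for clarity; the repository's base_page.py likely constructs it internally, so treat this as an illustration of the pattern rather than the actual class:

```python
class BasePage:
    """Minimal sketch: resolve JSON locators, then delegate to the Playwright page."""

    PAGE_NAME = ""  # subclasses set this to the JSON filename stem

    def __init__(self, page, locator_manager):
        self.page = page
        self.lm = locator_manager

    def _selector(self, name: str) -> str:
        return self.lm.get_selector(self.PAGE_NAME, name)

    def fill(self, name: str, value: str) -> None:
        self.page.fill(self._selector(name), value)

    def click(self, name: str) -> None:
        self.page.click(self._selector(name))
```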
from pages.base_page import BasePage
class LoginPage(BasePage):
    PAGE_NAME = "login_page"  # Matches JSON filename

    def enter_username(self, username: str):
        # Uses locator from login_page.json
        self.fill("username_input", username)
The framework uses the Page Object Model (POM) for maintainable tests.
- Create locator JSON file:
// locators/pages/my_page.json
{
  "element_name": {
    "selector": "#my-element",
    "type": "css",
    "description": "Description"
  }
}
- Create page class:
# pages/my_page.py
from pages.base_page import BasePage

class MyPage(BasePage):
    PAGE_NAME = "my_page"  # Must match JSON filename
    PAGE_URL = "/my-page"

    def do_something(self):
        self.click("element_name")
        self.fill("input_field", "text")
- Use in tests:
def test_my_feature(page):
    my_page = MyPage(page)
    my_page.navigate()
    my_page.do_something()
| Fixture | Scope | Description |
|---|---|---|
| settings | session | Application settings |
| browser | session | Browser instance |
| context | function | Browser context (isolated) |
| page | function | New page for each test |
| authenticated_context | session | Context with login session |
| authenticated_page | function | Page with authentication |
| api_client | session | REST API client |
| screenshot_manager | session | Screenshot utility |
The framework supports a "login once, use everywhere" pattern:
# In conftest.py - Login is performed once per session
@pytest.fixture(scope="session")
def authenticated_context(browser, settings):
    context = browser.new_context()
    page = context.new_page()
    # Login once
    login_page = LoginPage(page)
    login_page.navigate_to_login()
    login_page.login_with_credentials()
    page.close()
    yield context
    # Logout once at the end
    logout_page = context.new_page()
    HomePage(logout_page).logout()
    context.close()

# Use in tests
def test_something(authenticated_page):
    # Already logged in!
    home_page = HomePage(authenticated_page)
    home_page.do_something()
Benefits:
- Login/logout executed only once per test session
- Faster test execution
- Shared authentication state across tests
The framework includes a robust API client for REST API testing.
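The retry mechanism mentioned in Features typically wraps each request in exponential backoff. The helper below is a generic sketch of that pattern, not the exact code in utils/api_client.py:

```python
import time


def with_retries(func, attempts=3, base_delay=0.5,
                 retry_on=(ConnectionError, TimeoutError)):
    """Call func(), retrying with exponential backoff on transient errors."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except retry_on:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** (attempt - 1))
```

A call such as api_client.get("/users") would then be wrapped as with_retries(lambda: api_client.get("/users")).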
from utils.api_client import APIClient
def test_api_example(api_client):
    # GET request
    response = api_client.get("/users")
    assert response.status_code == 200

    # POST request
    response = api_client.post("/users", json={
        "name": "John",
        "email": "[email protected]"
    })
    assert response.status_code == 201

    # PUT request
    response = api_client.put("/users/1", json={"name": "Updated"})

    # DELETE request
    response = api_client.delete("/users/1")

def test_authenticated_api(api_client):
    # Set token
    api_client.token = "your-jwt-token"

    # Or set API key
    api_client.set_api_key("your-api-key")

    response = api_client.get("/protected-resource")
You can attach tests to an already running browser for debugging. This saves time during development.
# macOS - Chrome
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
--remote-debugging-port=9222 \
--user-data-dir=/tmp/chrome-debug
# macOS - Chromium
/Applications/Chromium.app/Contents/MacOS/Chromium \
--remote-debugging-port=9222 \
--user-data-dir=/tmp/chromium-debug
# Linux - Chrome
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug
# Windows - Chrome
"C:\Program Files\Google\Chrome\Application\chrome.exe" ^
--remote-debugging-port=9222 ^
--user-data-dir=C:\temp\chrome-debug
Open http://localhost:9222/json/version in another browser and copy the webSocketDebuggerUrl.
Add to your .env file:
CDP_ENDPOINT=ws://localhost:9222/devtools/browser/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
pytest tests/ui/test_login.py -v
The tests will now use your existing browser session!
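Copying the WebSocket URL by hand can also be automated: the /json/version payload is plain JSON, so a small helper can discover the endpoint at run time. A sketch (the helper names are illustrative; the connect step itself uses Playwright's connect_over_cdp):

```python
import json
from urllib.request import urlopen


def extract_ws_endpoint(version_payload: dict) -> str:
    """Pull the browser-level WebSocket URL out of /json/version data."""
    return version_payload["webSocketDebuggerUrl"]


def discover_cdp_endpoint(host: str = "localhost", port: int = 9222) -> str:
    """Ask a debug-enabled Chrome for its CDP WebSocket endpoint."""
    with urlopen(f"http://{host}:{port}/json/version") as resp:
        return extract_ws_endpoint(json.load(resp))


# With Playwright, the endpoint is then used roughly like:
#   browser = playwright.chromium.connect_over_cdp(discover_cdp_endpoint())
```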
Screenshots are automatically captured when tests fail. Configure in .env:
SCREENSHOT_ON_FAILURE=true
Screenshots are saved to screenshots/failures/ and attached to Allure reports.
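The usual way to wire this up (and roughly what a conftest.py does for it) is a pytest_runtest_makereport hook. The sketch below assumes a page fixture is in use; the helper name is illustrative:

```python
from datetime import datetime
from pathlib import Path

import pytest


def failure_screenshot_path(test_name: str, root: str = "screenshots/failures") -> Path:
    """Build a unique, timestamped path for a failure screenshot."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(root) / f"{test_name}_{stamp}.png"


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """After the test body runs, save a screenshot if it failed."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        page = item.funcargs.get("page")  # only present for UI tests
        if page is not None:
            path = failure_screenshot_path(item.name)
            path.parent.mkdir(parents=True, exist_ok=True)
            page.screenshot(path=str(path))
```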
def test_with_screenshot(page):
    # In page object
    my_page.take_screenshot("step_1")

    # Full page screenshot
    my_page.take_screenshot("full_page", full_page=True)

    # Element screenshot
    my_page.take_element_screenshot("element_name", "element_shot")
HTML reports are generated automatically:
# Run tests (report generated automatically)
pytest
# Report location
open reports/report.html
Allure provides beautiful, interactive reports.
# Install Homebrew if not installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Allure
brew install allure
# Verify installation
allure --version
# Download and install
wget https://github.com/allure-framework/allure2/releases/download/2.24.0/allure-2.24.0.tgz
tar -xzf allure-2.24.0.tgz
sudo mv allure-2.24.0 /opt/allure
sudo ln -s /opt/allure/bin/allure /usr/local/bin/allure
# Verify
allure --version
# Using Scoop
scoop install allure
# Or using Chocolatey
choco install allure
# Verify
allure --version
# Run tests (generates allure-results)
pytest --alluredir=reports/allure-results
# Generate HTML report
allure generate reports/allure-results -o reports/allure-report --clean
# Open report in browser
allure open reports/allure-report
# Or serve directly
allure serve reports/allure-results
Configure email settings in .env:
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=[email protected]
SMTP_PASSWORD=your_app_password
EMAIL_RECIPIENTS=[email protected],[email protected]
- Go to Google Account → Security
- Enable 2-Step Verification
- Go to App passwords
- Generate a new app password for "Mail"
- Use this password in SMTP_PASSWORD
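For reference, assembling such a report email with the standard library looks roughly like this — a sketch of the idea; utils/email_reporter.py is the framework's actual implementation, and build_report_message is an illustrative name:

```python
import smtplib
from email.message import EmailMessage


def build_report_message(results: dict, sender: str, recipients: list) -> EmailMessage:
    """Assemble the summary email; attachments would be added separately."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = (
        f"Test Report: {results['passed']}/{results['total']} passed "
        f"({results['failed']} failed, {results['duration']})"
    )
    msg.set_content(
        "Automated test run finished.\n"
        f"Passed:   {results['passed']}\n"
        f"Failed:   {results['failed']}\n"
        f"Duration: {results['duration']}\n"
    )
    return msg


# Sending then uses STARTTLS on port 587, matching the SMTP settings above:
#   with smtplib.SMTP(smtp_server, 587) as smtp:
#       smtp.starttls()
#       smtp.login(username, password)
#       smtp.send_message(msg)
```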
from utils.email_reporter import EmailReporter

reporter = EmailReporter()
reporter.send_report(
    test_results={
        "passed": 10,
        "failed": 2,
        "total": 12,
        "duration": "5m 30s"
    },
    attachments=["reports/report.html"]
)
Uncomment in conftest.py:
def pytest_sessionfinish(session, exitstatus):
    reporter = EmailReporter()
    reporter.send_report(
        test_results={...},
        attachments=["reports/report.html"]
    )
The workflow is configured for manual trigger only to prevent automatic runs.
- Add Secrets in GitHub Repository:
  - Go to Settings → Secrets and variables → Actions
  - Add these secrets:
    - BASE_URL: Your application URL
    - TEST_USERNAME: Test user username
    - TEST_PASSWORD: Test user password
    - API_KEY: API key (if needed)
- Run Workflow Manually:
  - Go to the Actions tab
  - Select the "Test Automation" workflow
  - Click "Run workflow"
  - Select branch and options
  - Click "Run workflow"
- Enable Automatic Triggers (optional): edit .github/workflows/test.yml and uncomment:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
- Go to Actions tab
- Click on the workflow run
- Download artifacts (test-results, allure-results)
- Jenkins with Pipeline plugin
- Python 3.11+ on Jenkins agent
- Allure Jenkins plugin (optional)
- Create Pipeline Job:
  - New Item → Pipeline
  - Name: "Test Automation"
- Configure Pipeline:
  - Definition: Pipeline script from SCM
  - SCM: Git
  - Repository URL: Your repo URL
  - Script Path: Jenkinsfile
- Add Credentials:
  - Manage Jenkins → Credentials
  - Add credentials:
    - TEST_USERNAME: Secret text
    - TEST_PASSWORD: Secret text
    - API_KEY: Secret text
- Install Allure Plugin (optional):
  - Manage Jenkins → Plugins
  - Search and install "Allure"
  - Configure Allure in Global Tool Configuration
- Run Pipeline:
  - Open the job
  - Click "Build with Parameters"
  - Select browser and test type
  - Click "Build"
- Build page shows test results
- Allure Report link (if plugin installed)
- Download artifacts from build
- Create Pipeline:
  - Pipelines → New Pipeline
  - Select your repository
  - Choose "Existing Azure Pipelines YAML file"
  - Select azure-pipelines.yml
- Configure Variables:
  - Edit Pipeline → Variables
  - Add variables:
    - BASE_URL: Your application URL
    - TEST_USERNAME: (mark as secret)
    - TEST_PASSWORD: (mark as secret)
    - API_KEY: (mark as secret)
- Run Pipeline:
  - Click "Run pipeline"
  - Select parameters (browser, test type)
  - Click "Run"
- Pipeline run shows test results
- Download artifacts from pipeline
- View test results in "Tests" tab
- Create CodeBuild Project:
  - Go to AWS CodeBuild
  - Create build project
  - Source: Your repository (GitHub, CodeCommit)
  - Environment:
    - Managed image
    - Ubuntu, Standard, aws/codebuild/standard:7.0
  - Buildspec: aws-codepipeline/buildspec.yml
- Configure Environment Variables:
  - In CodeBuild project settings
  - Add environment variables:
    - BASE_URL
    - TEST_USERNAME
    - TEST_PASSWORD
    - API_KEY
  - Or use AWS Secrets Manager
- Create CodePipeline (optional):
  - Create pipeline
  - Add Source stage
  - Add Build stage with CodeBuild project
- Run Build:
  - Start build manually
  - Or trigger via pipeline
- Build logs in CodeBuild
- Artifacts in S3 (if configured)
- Test reports in CodeBuild Reports
- Use markers to categorize tests:
  @pytest.mark.smoke
  @pytest.mark.ui
  def test_login():
      pass
- Use Allure decorators for better reports:
  @allure.epic("Authentication")
  @allure.feature("Login")
  @allure.severity(allure.severity_level.CRITICAL)
  def test_login():
      pass
- Use steps for clarity:
  with allure.step("Navigate to login page"):
      login_page.navigate()
- Prefer stable selectors:
  - Test IDs: [data-testid='login-btn']
#login-button - Avoid: XPath with indexes, dynamic classes
- Test IDs:
-
Keep locators updated:
- Review locators when UI changes
- Use descriptive names
- Use session-scoped fixtures for expensive setup
- Run tests in parallel with
pytest-xdist - Use headless mode in CI/CD
- Attach to browser during development
This project uses pytest plugins (e.g. pytest-base-url, pytest-playwright) that already provide some CLI options such as --base-url and --headed.
If you add the same option again in conftest.py (via pytest_addoption), pytest may fail at startup with an error like:
ValueError: option names {'--base-url'} already added
argparse.ArgumentError: argument --headed: conflicting option string: --headed
Fix: remove/rename the duplicate options from conftest.py.
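In practice that means conftest.py should only register options no installed plugin already owns — for example (the --app-env option here is purely hypothetical):

```python
def pytest_addoption(parser):
    # pytest-base-url already registers --base-url and pytest-playwright
    # registers --headed, so add only options that no plugin provides.
    parser.addoption(
        "--app-env",  # hypothetical example option
        action="store",
        default="staging",
        help="Environment name used to pick test data",
    )
```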
If you have multiple Python environments (venv + conda), run tests using:
python -m pytest ...
This ensures pytest runs using the active Python environment.
If you accidentally paste terminal output into conftest.py, pytest will fail to import it. Keep conftest.py as valid Python code only.
# Reinstall browsers
playwright install --with-deps
# Allow Playwright to access browsers
xattr -cr ~/.cache/ms-playwright
# Increase timeout in .env
DEFAULT_TIMEOUT=60000
# Ensure browser is running with correct port
# Check if port 9222 is in use
lsof -i :9222
# Run with debug logging
DEBUG=true pytest -v
# Run with Playwright debug
PWDEBUG=1 pytest tests/ui/test_login.py
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests locally
- Submit a pull request
For issues and questions:
- Create an issue in the repository
- Check existing issues for solutions
Happy Testing! 🚀