Pytest in Python – Detailed Explanation with Example
Pytest is a popular testing framework in Python that makes it easy to write simple and scalable test cases for various applications. It is widely used because of its simplicity, rich features, and strong community support.
Why Pytest?
- Easy to use with minimal boilerplate code.
- Supports fixtures (setup and teardown of resources).
- Provides powerful assertions.
- Can run unit tests, integration tests, and functional tests.
- Supports parameterized testing.
- Easily extendable through plugins.
Installation
You can install pytest using pip:
pip install pytest
Basic Example
Let's create a simple function and test it.
File: calculator.py
def add(x, y):
    return x + y

def subtract(x, y):
    return x - y
File: test_calculator.py
import pytest
from calculator import add, subtract
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0

def test_subtract():
    assert subtract(5, 3) == 2
    assert subtract(3, 5) == -2
    assert subtract(0, 0) == 0
Running the Tests
To run the tests, simply use the command:
pytest
Output:
=================== test session starts ===================
collected 2 items
test_calculator.py .. [100%]
=================== 2 passed in 0.12s ====================
Assertions in Pytest
Pytest uses standard Python assert statements for verification.
Example:
def test_assert():
    assert 4 == 4
    assert "hello" in "hello world"
    assert isinstance(3.14, float)
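When an assertion fails, pytest rewrites it to show the compared values in detail. For floating-point numbers, exact equality is unreliable, so pytest provides pytest.approx; a small sketch:

```python
import pytest

def test_float_compare():
    # 0.1 + 0.2 is not exactly 0.3 in binary floating point,
    # so compare with pytest.approx instead of plain ==
    assert 0.1 + 0.2 == pytest.approx(0.3)
```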
Fixtures (Setup and Teardown)
Fixtures are used to set up some preconditions before the test runs and clean up afterward.
Example:
import pytest
@pytest.fixture
def sample_data():
    return {"name": "Alice", "age": 25}

def test_sample_data(sample_data):
    assert sample_data["name"] == "Alice"
    assert sample_data["age"] == 25
Fixtures help avoid code duplication.
Parameterized Testing
You can use @pytest.mark.parametrize to run the same test with different data sets.
Example:
import pytest
from calculator import add
@pytest.mark.parametrize("x, y, result", [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(x, y, result):
    assert add(x, y) == result
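Parametrize decorators can also be stacked, in which case pytest runs the test over the cross-product of the values. A small self-contained sketch:

```python
import pytest

# two stacked decorators produce 2 x 3 = 6 test cases
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [10, 20, 30])
def test_addition_is_commutative(x, y):
    assert x + y == y + x
```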
Handling Exceptions
To test if a function raises an exception:
import pytest
def divide(x, y):
    if y == 0:
        raise ValueError("Cannot divide by zero")
    return x / y

def test_divide():
    with pytest.raises(ValueError):
        divide(1, 0)
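pytest.raises can also verify the exception message through its match argument, which is treated as a regular expression. A sketch reusing the same divide function (redefined here so the example is self-contained):

```python
import pytest

def divide(x, y):
    if y == 0:
        raise ValueError("Cannot divide by zero")
    return x / y

def test_divide_message():
    # match= checks the exception message against a regular expression
    with pytest.raises(ValueError, match="divide by zero"):
        divide(1, 0)
```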
Skipping Tests
Sometimes you may want to skip a test, either unconditionally with @pytest.mark.skip or conditionally with @pytest.mark.skipif.
@pytest.mark.skip(reason="Skipping this test for now")
def test_skip():
    assert 1 == 1
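For conditional skipping, @pytest.mark.skipif takes a boolean condition and only skips when it is true; a sketch:

```python
import sys
import pytest

# runs on Python 3.8+; otherwise pytest reports the test as skipped
@pytest.mark.skipif(sys.version_info < (3, 8), reason="requires Python 3.8+")
def test_modern_python():
    assert sys.version_info >= (3, 8)
```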
Running Only Specific Tests
To run only the tests whose names match an expression, use:
pytest -k "test_add"
You can also run a single test directly by its node ID:
pytest test_calculator.py::test_add
Pytest Coverage
To measure code coverage, install the pytest-cov plugin:
pip install pytest-cov
pytest --cov=calculator
Mocking External APIs Using Pytest
When testing applications that make API calls, it's not ideal to hit the real external service every time the tests run. Instead, mocking allows you to simulate API responses without making actual HTTP requests.
In Python, the unittest.mock library (part of the standard library) works seamlessly with Pytest to mock API calls.
Why Mock External APIs?
- Faster test execution.
- Avoid rate limits or usage costs from third-party APIs.
- Isolate code behavior without relying on external services.
- Simulate different API responses like success, failure, or timeouts.
Example: Mocking an API with Pytest
Let's say we have a function that fetches user data from a fake API.
File: api.py
import requests
def get_user_data(user_id):
    url = f"https://jsonplaceholder.typicode.com/users/{user_id}"
    response = requests.get(url)
    if response.status_code == 200:
        return response.json()
    else:
        raise Exception("API request failed")
Testing with Mocking
File: test_api.py
import pytest
import requests
from api import get_user_data
from unittest.mock import patch
def mock_api_success(*args, **kwargs):
    class MockResponse:
        status_code = 200
        def json(self):
            return {"id": 1, "name": "Alice", "email": "alice@example.com"}
    return MockResponse()

@patch("requests.get", side_effect=mock_api_success)
def test_get_user_data_success(mock_get):
    user_data = get_user_data(1)
    assert user_data["name"] == "Alice"
    assert user_data["email"] == "alice@example.com"
    mock_get.assert_called_once_with("https://jsonplaceholder.typicode.com/users/1")
def mock_api_failure(*args, **kwargs):
    class MockResponse:
        status_code = 404
    return MockResponse()

@patch("requests.get", side_effect=mock_api_failure)
def test_get_user_data_failure(mock_get):
    with pytest.raises(Exception) as exc:
        get_user_data(999)
    assert str(exc.value) == "API request failed"
    mock_get.assert_called_once_with("https://jsonplaceholder.typicode.com/users/999")
Explanation
- @patch("requests.get"): Replaces the requests.get function with a mock for the duration of the test.
- MockResponse class: Simulates the API response with a status_code attribute and a json() method.
- side_effect: Makes each call to the patched mock run our function, so every call returns a fresh MockResponse.
- mock_get.assert_called_once_with(): Confirms that the API was called exactly once with the expected URL.
Testing Timeout or Exception
You can also simulate exceptions like timeouts.
@patch("requests.get", side_effect=requests.exceptions.Timeout)
def test_api_timeout(mock_get):
    with pytest.raises(requests.exceptions.Timeout):
        get_user_data(1)
    mock_get.assert_called_once()
Using Fixtures for Mocks
You can make your mock reusable across multiple tests by creating a fixture. The mocker fixture used here comes from the pytest-mock plugin (installation is covered in the next step).
@pytest.fixture
def mock_api(mocker):
    return mocker.patch("requests.get", side_effect=mock_api_success)

def test_with_fixture(mock_api):
    data = get_user_data(1)
    assert data["name"] == "Alice"
Install pytest-mock Plugin
For an easier mocking experience, you can install pytest-mock:
pip install pytest-mock
Mock Async API Calls with Pytest
Mocking async API calls in Pytest can be done using pytest-asyncio along with unittest.mock or pytest-mock.
Why Mock Async APIs?
- Simulate different API responses without actual API calls.
- Test async code behavior like success, failure, and timeouts.
- Speed up tests without waiting for actual network delays.
Installation
Install the required libraries:
pip install pytest pytest-asyncio pytest-mock aiohttp
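pytest-asyncio needs to know which tests are asynchronous. In its default strict mode, each async test carries the @pytest.mark.asyncio marker, as in the examples below. Alternatively, you can enable auto mode in pytest.ini, which makes the marker optional (a configuration sketch):

```ini
[pytest]
asyncio_mode = auto
```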
Example: Mocking Async API with Pytest
Example API Function
Let's assume we're using aiohttp for async HTTP requests.
File: async_api.py
import aiohttp
async def fetch_user_data(user_id):
url = f"https://jsonplaceholder.typicode.com/users/{user_id}"
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
if response.status == 200:
return await response.json()
else:
raise Exception("API request failed")
Testing with Async Mocking
File: test_async_api.py
import pytest
from async_api import fetch_user_data
from unittest.mock import AsyncMock, patch
@pytest.mark.asyncio
@patch("aiohttp.ClientSession.get")
async def test_fetch_user_data_success(mock_get):
    mock_response = AsyncMock()
    mock_response.status = 200
    mock_response.json.return_value = {"id": 1, "name": "Alice", "email": "alice@example.com"}
    mock_get.return_value.__aenter__.return_value = mock_response

    user_data = await fetch_user_data(1)
    assert user_data["name"] == "Alice"
    assert user_data["email"] == "alice@example.com"
    mock_get.assert_called_once()

@pytest.mark.asyncio
@patch("aiohttp.ClientSession.get")
async def test_fetch_user_data_failure(mock_get):
    mock_response = AsyncMock()
    mock_response.status = 404
    mock_get.return_value.__aenter__.return_value = mock_response

    with pytest.raises(Exception) as exc:
        await fetch_user_data(999)
    assert str(exc.value) == "API request failed"
    mock_get.assert_called_once()
Explanation
- AsyncMock: Mocks async functions so their return values can be awaited.
- __aenter__ and __aexit__: Simulate the async context manager used by aiohttp.ClientSession().
- pytest.mark.asyncio: Marks the test function as async-compatible.
- mock_response.json.return_value: Simulates the JSON response data.
Testing Timeout Exceptions
You can also simulate timeouts or other exceptions.
from aiohttp import ClientError

@pytest.mark.asyncio
@patch("aiohttp.ClientSession.get", side_effect=ClientError)
async def test_fetch_user_data_timeout(mock_get):
    with pytest.raises(ClientError):
        await fetch_user_data(1)
    mock_get.assert_called_once()
Using Fixtures for Async Mocks
You can create reusable async mock fixtures. Note that the mock must be wired up as an async context manager (via __aenter__), just as in the tests above; returning the response object directly would not work.
@pytest.fixture
def mock_api(mocker):
    mock_response = AsyncMock()
    mock_response.status = 200
    mock_response.json.return_value = {"id": 1, "name": "Alice"}
    mock_get = mocker.patch("aiohttp.ClientSession.get")
    mock_get.return_value.__aenter__.return_value = mock_response
    return mock_get

@pytest.mark.asyncio
async def test_with_fixture(mock_api):
    data = await fetch_user_data(1)
    assert data["name"] == "Alice"
Integrating Pytest with CI/CD Pipelines
Integrating Pytest with CI/CD pipelines helps automate the testing process, ensuring that your code is always tested before deployment. This setup improves code quality, catches bugs early, and enables continuous delivery.
Overview of CI/CD Pipeline
A typical CI/CD pipeline consists of:
- Continuous Integration (CI): Automatically run tests whenever code is pushed to the repository.
- Continuous Delivery (CD): Automatically deploy tested code to staging or production environments.
Popular CI/CD platforms include:
- GitHub Actions
- GitLab CI/CD
- Jenkins
- CircleCI
- Travis CI
Example: Using GitHub Actions with Pytest
We'll set up Pytest to run automatically on every push or pull request.
Project Structure
my_project/
├── api.py # Code to test
├── test_api.py # Pytest tests
├── requirements.txt # Dependencies
└── .github/
└── workflows/
└── pytest.yml # CI/CD configuration
Step 1: Create Requirements File
List your dependencies in requirements.txt:
pytest
pytest-asyncio
aiohttp
Step 2: GitHub Actions Configuration
Create the CI/CD pipeline file at:
.github/workflows/pytest.yml
pytest.yml
name: Run Pytest

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt

      - name: Run Tests
        run: pytest --maxfail=1 --disable-warnings
Step 3: Push Code to GitHub
Commit your code and push it to the main branch:
git add .
git commit -m "Set up CI with Pytest"
git push origin main
Step 4: View Test Results
Go to your GitHub repository:
- Navigate to the Actions tab.
- Click on the latest workflow run.
- View the logs to see whether tests passed or failed.
Integrating Coverage Reports
If you want to measure code coverage in the pipeline, install pytest-cov:
pip install pytest-cov
Modify the Run Tests step:
- name: Run Tests with Coverage
  run: pytest --cov=api --cov-report=xml
You can later upload the coverage report using GitHub's built-in services or third-party tools like Codecov.
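For example, with the third-party codecov/codecov-action (an assumption; adjust to whichever coverage service you use), the upload is one extra workflow step:

```yaml
# assumes coverage.xml was produced by the previous step
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3
  with:
    files: coverage.xml
```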
Using GitLab CI/CD
If you're using GitLab, create .gitlab-ci.yml:
stages:
  - test

pytest:
  image: python:3.10
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest --maxfail=1 --disable-warnings
  only:
    - main
Using Jenkins
- Install Pytest in your virtual environment.
- Create a Jenkins job.
- Configure the build step: use Execute Shell and add the following commands:
pip install -r requirements.txt
pytest --maxfail=1 --disable-warnings --junitxml=results.xml
- Use the JUnit plugin to display the test reports (point it at results.xml).
Conclusion
Pytest is a flexible and powerful framework that simplifies the process of writing tests in Python. It provides an easy-to-use API, supports fixtures, and allows parameterized tests. Whether you're writing small unit tests or complex functional tests, Pytest is a great choice.
Mocking external APIs in Pytest is essential for reliable and isolated tests. With patch() from unittest.mock or pytest-mock, you can simulate various API scenarios without making real HTTP requests.
Mocking async API calls in Pytest helps isolate external dependencies and test various API response scenarios without relying on actual network requests. Use pytest-asyncio with AsyncMock or pytest-mock to handle async functions.
Integrating Pytest with CI/CD pipelines automates testing, improves software quality, and speeds up the delivery process. Whether you're using GitHub Actions, GitLab, or Jenkins, the setup is straightforward and highly customizable.