
Using pytest.raises to Validate Exceptions Like a Pro
Negative Tests are useful too!
As a QA Engineer or automation enthusiast, writing tests that validate correct behavior is only half the battle. The other half? Making sure the app handles wrong behavior gracefully. That's where negative testing comes in - and pytest.raises is your secret weapon.
In this post, we'll explore how pytest.raises lets you assert that exceptions are raised without failing the test. This is perfect for validating edge cases, bad input, or failed operations.
What is pytest.raises?
In Pytest, if your code raises an exception during a test, the test normally fails - as it should. But what if you're expecting the exception? That's where pytest.raises comes in.
It wraps a block of code and passes the test only if the specified exception is raised. If it's not raised, the test fails.
Why Use pytest.raises?
- Makes negative testing clean and readable
- Helps document edge-case handling
- Prevents false positives in error conditions
- Encourages testing of robust, defensive code
A Real-World Example
Let's say we're testing a simple division function that raises a ZeroDivisionError when the denominator is zero.
def safe_divide(x, y):
    return x / y
Now for the test:
import pytest

def test_safe_divide_zero_division():
    with pytest.raises(ZeroDivisionError):
        safe_divide(10, 0)
This test will pass if safe_divide(10, 0) throws ZeroDivisionError. If it doesn't (for example, if the code silently returns None), the test fails - telling us something's broken.
Accessing the Exception
You can even inspect the exception message or attributes:
def test_value_error_with_message():
    with pytest.raises(ValueError) as excinfo:
        int("hello")  # not a valid integer
    assert "invalid literal" in str(excinfo.value)
This is powerful when you want to verify the type and details of the exception.
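If all you need is a check on the message text, pytest.raises also accepts a match argument, which is treated as a regular expression and searched against the string form of the exception. Here is one way to write the same check in a single statement:

def test_value_error_with_match():
    # match= is searched (re.search) in str(excinfo.value)
    with pytest.raises(ValueError, match="invalid literal"):
        int("hello")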
Clean Up with pytest.raises
Before pytest.raises, Python developers would clutter tests with try/except blocks and fail manually. Compare:
Old way:
def test_safe_divide_old():
    try:
        safe_divide(10, 0)
        assert False, "Expected ZeroDivisionError"
    except ZeroDivisionError:
        pass
Pytest way:
def test_safe_divide_pytest():
    with pytest.raises(ZeroDivisionError):
        safe_divide(10, 0)
Much cleaner, right?
Use Case Ideas for pytest.raises
- Invalid API parameters (TypeError, ValueError)
- Database connection failures (ConnectionError)
- File not found or permission issues (IOError, PermissionError)
- Custom business rule exceptions (see the sketch below)
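As a sketch of that last case, suppose the application defines a custom exception for withdrawals. Both InsufficientFundsError and withdraw here are hypothetical names used only for illustration:

class InsufficientFundsError(Exception):
    pass

def withdraw(balance, amount):
    if amount > balance:
        raise InsufficientFundsError("amount exceeds balance")
    return balance - amount

def test_withdraw_rejects_overdraft():
    with pytest.raises(InsufficientFundsError):
        withdraw(100, 250)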
Final Thought
In automation testing, you should never be afraid of exceptions - you should expect them when the input is bad. pytest.raises gives you the confidence to write bold, bulletproof test cases that ensure your code handles errors on purpose - not by accident.
Have a favorite exception handling trick or a real bug you caught using pytest.raises? Share it in the comments below.
Level Up Your Pytest WebDriver Game
Essential Options for SQA Engineers
Why WebDriver Options Matter
WebDriver options allow you to customize the behavior of your browser instance, enabling you to optimize performance, handle specific scenarios, and mitigate common testing challenges. By strategically applying these options, you can create more robust, stable, and efficient automated tests.
1. Headless Mode with GPU Disabled: Speed and Stability Combined
Running tests in headless mode (without a visible browser window) is a game-changer for speed and resource efficiency. However, GPU-related issues can sometimes lead to crashes. The solution? Disable the GPU while running headless.
- --headless=new: Activates the newer, more efficient headless mode.
- --disable-gpu: Prevents GPU-related crashes, ensuring test stability.
This combination provides a significant performance boost and enhances the reliability of your tests, especially in CI/CD environments.
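A minimal sketch of how these flags are typically applied with Selenium's Chrome options (assuming Selenium 4, where the bundled Selenium Manager resolves the driver):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")   # run without a visible window
options.add_argument("--disable-gpu")    # avoid GPU-related crashes in headless runs

driver = webdriver.Chrome(options=options)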
2. Evading Detection: Disabling DevTools and Automation Flags
Websites are increasingly sophisticated in detecting automated browsers. To minimize the risk of your tests being flagged, disable DevTools and automation-related flags.
- --disable-blink-features=AutomationControlled: Prevents the navigator.webdriver property from being set to true.
- excludeSwitches, enable-automation: Removes the "Chrome is being controlled by automated test software" infobar.
- useAutomationExtension, False: Disables the automation extension (all three settings are applied in the sketch below).
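Here is one way to combine these settings on a Chrome Options object. Treat it as a sketch rather than a guaranteed recipe, since detection techniques vary by site:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--disable-blink-features=AutomationControlled")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option("useAutomationExtension", False)

driver = webdriver.Chrome(options=options)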
3. Ignoring Certificate Errors: Simplifying HTTPS Testing
When testing HTTPS websites with self-signed or invalid certificates, certificate errors can disrupt your tests. The --ignore-certificate-errors option allows you to bypass these errors.
This option is invaluable for testing development or staging environments where certificate issues are common. However, remember to avoid using this in production tests, as it can mask real security vulnerabilities.
4. Disabling Extensions and Popup Blocking: Minimizing Interference
Browser extensions and pop-up blockers can interfere with your tests, leading to unpredictable behavior. Disabling them ensures a clean and consistent testing environment.
- --disable-extensions: Prevents extensions from loading, reducing potential conflicts.
- --disable-popup-blocking: Stops pop-ups from appearing, simplifying test interactions.
Integrating with Pytest Fixtures
To streamline your Pytest setup, encapsulate your WebDriver options within a fixture.
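Here is a minimal sketch of what such a fixture might look like, pulling together the options discussed above; the fixture name chrome_driver and the exact flag list are placeholders you should adapt to your own suite:

import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

@pytest.fixture
def chrome_driver():
    options = Options()
    options.add_argument("--headless=new")
    options.add_argument("--disable-gpu")
    options.add_argument("--disable-blink-features=AutomationControlled")
    options.add_argument("--ignore-certificate-errors")
    options.add_argument("--disable-extensions")
    options.add_argument("--disable-popup-blocking")
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option("useAutomationExtension", False)

    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()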
This fixture sets up a Chrome browser with your desired options and makes it available to your test functions.
Conclusion
Mastering WebDriver options is essential for SQA engineers seeking to optimize their Pytest automation workflows. By leveraging these options, you can create faster, more stable, and reliable tests, ultimately improving the overall quality and efficiency of your testing efforts. Experiment with these options and discover how they can enhance your testing practices.
Capturing Screenshots in Fixture Teardown
Cool Trick with Teardown
Pytest has solidified its position as a go-to testing framework for Python developers due to its simplicity, extensibility, and powerful features. In this blog post, we'll dive deep into using Pytest, specifically focusing on its integration with Playwright for browser automation, and explore how to capture screenshots during fixture teardown for enhanced debugging and result analysis.
Capturing Screenshots in Fixture Teardown
To capture a screenshot before the browser closes, we can modify the page fixture to include a teardown phase. This makes debugging easier and gives you a chance to review the automation run for anything unexpected.
Any code in the fixture that appears after "yield page" will run at the conclusion of the test.
import os

import pytest
from playwright.sync_api import sync_playwright

@pytest.fixture
def page(request):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        yield page
        # Teardown: everything after the yield runs once the test finishes
        screenshot_path = f"screenshots/{request.node.name}.png"
        os.makedirs(os.path.dirname(screenshot_path), exist_ok=True)
        page.screenshot(path=screenshot_path)
        browser.close()

def test_example_with_screenshot(page):
    page.goto("https://www.cryan.com")
    assert "cryan.com" in page.title()

def test_example_fail(page):
    page.goto("https://www.cryan.com")
    assert "Wrong Title" in page.title()
After running the tests, you'll find screenshots in the screenshots directory. These screenshots will help you understand the state of the browser at the end of each test, especially during failures.
Benefits of Screenshot Capture
- Debugging: Quickly identify issues by visually inspecting the browser state.
- Reporting: Include screenshots in test reports for better documentation.
- Visual Validation: Verify UI elements and layout.
Parametrization in Pytest
Use the same code over and over
Parametrization in Pytest allows you to run the same test function multiple times with different inputs. Instead of writing separate test functions for each set of data, you can define a single test and provide various argument sets using the @pytest.mark.parametrize decorator. This approach is especially useful for testing functions that need to handle a variety of inputs, edge cases, or data types.
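As a quick illustration before the larger example below, a single parametrized test can stand in for several near-identical ones. The squaring check here is just a placeholder:

import pytest

@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9), (-4, 16)])
def test_square(value, expected):
    assert value ** 2 == expected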
Why Use Parametrization?
- Code Reusability: Write one test function and reuse it for multiple test cases.
- Efficiency: Reduce boilerplate code and make your test suite easier to maintain.
- Clarity: Clearly define the inputs and expected outputs for each test case.
- Comprehensive Testing: Easily test a wide range of scenarios without extra effort.
Code Example
This code checks whether various internal sites are up and running. I ran similar code in the past so I could spot any issues before the morning standup.
Without parametrization, I would need a separate test case for each site, which adds maintenance overhead whenever anything changes.
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException

# List of websites to test
WEBSITES = [
    "https://www.company.com",
    "https://qa1.company.com",
    "https://qa2.company.com",
    "https://stage.company.com"
]

@pytest.fixture
def chrome_driver():
    """Fixture to set up and tear down Chrome WebDriver"""
    # Set up Chrome options
    chrome_options = Options()
    chrome_options.add_argument("--headless")  # Run in headless mode
    chrome_options.add_argument("--no-sandbox")
    chrome_options.add_argument("--disable-dev-shm-usage")
    # Initialize driver
    driver = webdriver.Chrome(options=chrome_options)
    driver.set_page_load_timeout(30)  # Set timeout to 30 seconds
    yield driver
    # Teardown
    driver.quit()

@pytest.mark.parametrize("website", WEBSITES)
def test_website_is_up(chrome_driver, website):
    """
    Test if a website loads successfully by checking:
    1. Page loads without timeout
    2. HTTP status is 200 (implicitly checked via successful load)
    3. Page title is not empty
    """
    try:
        # Attempt to load the website
        chrome_driver.get(website)
        # Check if page title exists and is not empty
        title = chrome_driver.title
        assert title, f"Website {website} loaded but has no title"
        # Optional: Check if body element exists
        body = chrome_driver.find_element(By.TAG_NAME, "body")
        assert body is not None, f"Website {website} has no body content"
        print(f"✓ {website} is up and running (Title: {title})")
    except WebDriverException as e:
        pytest.fail(f"Website {website} failed to load: {str(e)}")
    except AssertionError as e:
        pytest.fail(f"Website {website} loaded but content check failed: {str(e)}")

if __name__ == "__main__":
    pytest.main(["-v"])
mocker.spy
Learn how to use Pytest's mocker.spy for robust website testing
Testing interactions with external services or complex internal functions can be tricky. You want to ensure your website behaves correctly without relying on the actual implementation, which might be slow, unreliable, or have side effects. That's where pytest-mock's spy comes in!
What is mocker.spy?
mocker.spy lets you wrap any callable (function, method, etc.) and record its calls. You can then assert how many times it was called, what arguments it received, and what values it returned. This is incredibly useful for verifying interactions without actually mocking the underlying implementation.
Why is it cool for website testing?
Imagine you have a website that:
- Logs user activity to an external analytics service.
- Sends emails for password resets.
- Interacts with a third-party API for data retrieval.
Rather than asserting on the analytics service, the email server, or the live API themselves, you can use mocker.spy to verify that these calls happened with the expected arguments. Keep in mind that a spy still calls through to the real function, so pair it with a mock or stub of the underlying transport when the call itself must be suppressed.
A Practical Example: Tracking Analytics Events
Let's say your website has a function that logs user interactions to an analytics service:
# website/analytics.py
import requests

def track_event(user_id, event_name, event_data):
    try:
        requests.post("https://analytics.example.com/track", json={
            "user_id": user_id,
            "event_name": event_name,
            "event_data": event_data,
        })
    except requests.exceptions.RequestException as e:
        print(f"Error tracking event: {e}")
And your website's view function calls this:
# website/views.py
from website.analytics import track_event

def process_user_action(user_id, action_data):
    # ... process user action ...
    track_event(user_id, "user_action", action_data)
    # ... more logic ...
Here's how you can test it with mocker.spy:
# tests/test_views.py
from website import views

def test_process_user_action_tracks_event(mocker):
    # Spy on track_event as it is referenced inside website.views
    spy = mocker.spy(views, "track_event")
    user_id = 123
    action_data = {"item_id": 456}
    views.process_user_action(user_id, action_data)
    spy.assert_called_once_with(user_id, "user_action", action_data)
By incorporating mocker.spy into your website testing strategy, you can create robust and reliable tests that give you confidence in your website's functionality. Happy testing!
Naming Screenshots Dynamically in Pytest
Automate Screenshot Naming in Pytest: Easily Identify Test Failures
When running UI tests, capturing screenshots can be an invaluable debugging tool. However, managing these screenshots can quickly become chaotic if they are not properly labeled. One effective way to make screenshots easier to organize and track is by incorporating the test name into the filename. This ensures that each screenshot can be traced back to the exact test that generated it.
Capturing the Current Test Name in Pytest
Pytest provides an environment variable called PYTEST_CURRENT_TEST, which contains information about the currently executing test. We can extract the test name from this variable and use it to generate meaningful screenshot filenames.
Here's an example of how to do this in a Selenium-based test:
import os
import time
from datetime import datetime

def test_full_page_screenshot_adv(browser):
    browser.set_window_size(1315, 2330)
    browser.get("https://www.cryan.com")  # Navigate to the test page

    # Extract the current test name
    mytestname = os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]

    # Create a timestamp for unique filenames
    log_date = datetime.now().strftime('%Y-%m-%d-%H-%M')

    # Define the screenshot path
    screenshot_path = f"{mytestname}-{log_date}.png"

    # Capture and save the screenshot
    browser.save_screenshot(screenshot_path)
    print(f"Screenshot saved as {screenshot_path}")
How It Works
- Retrieve the Current Test Name:
  - The environment variable PYTEST_CURRENT_TEST holds information about the currently running test.
  - Using .split(':')[-1], we extract the actual test name from the full test path (illustrated below).
  - Further splitting by spaces (split(' ')[0]) ensures we only get the function name.
- Generate a Timestamp:
  - The datetime.now().strftime('%Y-%m-%d-%H-%M') call creates a timestamp in the format YYYY-MM-DD-HH-MM to ensure unique filenames.
- Save the Screenshot:
  - The test name and timestamp are combined to form a filename.
  - The screenshot is saved using Selenium's save_screenshot() method.
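To make the string handling concrete, here is what those splits do to a typical PYTEST_CURRENT_TEST value; the path and test name below are made up for illustration:

raw = "tests/test_screenshots.py::test_full_page_screenshot_adv (call)"
name = raw.split(':')[-1].split(' ')[0]
print(name)  # test_full_page_screenshot_adv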
Why This Matters
- Easier Debugging: Knowing which test generated a screenshot makes debugging test failures much simpler.
- Organized Test Artifacts: Each screenshot is uniquely named, reducing the chances of overwriting files.
- Automated Report Integration: The structured filenames can be linked to test reports, making them more informative.
Final Thoughts
By incorporating the test name into the screenshot filename, you can quickly identify which test generated a particular screenshot. This small tweak can save time when reviewing test results, especially in large automation suites.
Try implementing this in your test framework and see how much easier it becomes to manage your UI test screenshots!
PyTest Install
Alternative way to install PyTest
The popular pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.
Python is a good language to learn. According to the TIOBE and PYPL indexes, Python is the top programming language, with C and Java close behind.
If your application is Python-based, QA Engineers may find that writing automation in Python leads to a better understanding of the code logic - which in turn results in better testing.
If you're installing PyTest on your local MacBook Pro at work, you may run into permission issues. The local IT department may have your computer locked down, and some installs require Administrator permissions.
Here are the install instructions if you have limited rights.
PyTest User Install
Use python3 -m pip to install PyTest and Selenium:
python3 -m pip install pytest --user
python3 -m pip install selenium --user
Sample Test File
Here's a simple code sample to validate the install worked:
#!/usr/bin/env /usr/bin/python3
# Verify that Pytest was installed
import pytest
from selenium import webdriver
import sys
import requests
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from time import sleep

def test_google():
    global chrome_driver
    chrome_driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
    chrome_driver.get('https://www.google.com/')
    title = "Google"
    assert title == chrome_driver.title
    pageSource = chrome_driver.page_source
    gateway = "Carbon neutral since 2007"
    if not gateway in pageSource:
        pytest.fail("Google changed their webpage")
    chrome_driver.close()
Save this file on your computer; I recommend saving it as ~/pytest/google.py
Execute Test
To execute this test, open up Terminal:
cd ~/pytest/
pytest --no-header -v google.py
About
Welcome to Pytest Tips and Tricks, your go-to resource for mastering the art of testing with Pytest! Whether you're a seasoned developer or just dipping your toes into the world of Python testing, this blog is designed to help you unlock the full potential of Pytest - one of the most powerful and flexible testing frameworks out there. Here, I'll share a treasure trove of practical insights, clever techniques, and time-saving shortcuts that I've gathered from years of writing tests and debugging code.
Check out all the blog posts.
Blog Schedule
Thursday 10 | PlayWright
Friday 11 | Macintosh
Saturday 12 | Internet Tools
Sunday 13 | Misc
Monday 14 | Media
Tuesday 15 | QA
Wednesday 16 | Pytest