Simon Willison’s Weblog

87 posts tagged “testing”

2022

viewport-preview (via) I built a tiny tool which lets you preview a URL in a bunch of different common browser viewport widths, using iframes.

# 26th July 2022, 12 am / css, iframes, mobile, projects, testing

I discovered a while ago that all those errors and bugs that only appear when you demo something to an audience also magically appear when you record yourself demoing it to nobody. Maybe narrating a feature to a pretend audience takes the blinders off enough that you notice little mistakes you wouldn't have otherwise.

karaterobot

# 24th July 2022, 8:59 pm / testing

Running C unit tests with pytest (via) Brilliant, detailed tutorial by Gabriele Tornetta on testing C code using pytest, which also doubles up as a ctypes tutorial. There’s a lot of depth here—in addition to exercising C code through ctypes, Gabriele shows how to run each test in a separate process so that segmentation faults don’t fail the entire suite, then adds code to run the compiler as part of the pytest run, and then shows how to use gdb trickery to generate more useful stack traces.
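
The core pattern is compact. Here’s a minimal sketch of the ctypes side, assuming a hypothetical libexample.so that exports an int add(int, int) function (the tutorial goes much further, compiling the library during the pytest run and isolating segfaults in subprocesses):

```python
import ctypes

import pytest


@pytest.fixture(scope="session")
def libexample():
    # Load the compiled shared library. "libexample.so" and its add()
    # function are hypothetical stand-ins for your own C code.
    lib = ctypes.CDLL("./libexample.so")
    # Declare the C signature so ctypes converts arguments correctly.
    lib.add.argtypes = (ctypes.c_int, ctypes.c_int)
    lib.add.restype = ctypes.c_int
    return lib


def test_add(libexample):
    assert libexample.add(2, 3) == 5
```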

# 12th February 2022, 5:14 pm / c, ctypes, testing, pytest

How I build a feature

I’m maintaining a lot of different projects at the moment. I thought it would be useful to describe the process I use for adding a new feature to one of them, using the new sqlite-utils create-database command as an example.

[... 2,850 words]

2021

PAGNIs: Probably Are Gonna Need Its

Luke Page has a great post up with his list of YAGNI exceptions.

[... 1,289 words]

I’m pretty convinced that the biggest single contributor to improved software in my lifetime wasn’t object-orientation or higher-level languages or functional programming or strong typing or MVC or anything else: It was the rise of testing culture.

Tim Bray

# 1st June 2021, 2:35 pm / testing, tim-bray

When you have to mock a collaborator, avoid using the Mock object directly. Either use mock.create_autospec() or mock.patch(autospec=True) if at all possible. Autospeccing from the real collaborator means that if the collaborator's interface changes, your tests will fail. Manually speccing or not speccing at all means that changes in the collaborator's interface will not break your tests that use the collaborator: you could have 100% test coverage and your library would fall over when used!

Thea Flowers

# 17th March 2021, 4:44 pm / mocking, testing, python
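
Thea’s point is easiest to see side by side. A minimal sketch using a hypothetical PaymentGateway collaborator (the class and both tests are invented for illustration):

```python
from unittest import mock


class PaymentGateway:
    """Hypothetical collaborator used by the code under test."""

    def charge(self, amount, currency):
        ...


def test_with_autospec():
    # create_autospec copies the real charge() signature, so a call with
    # the wrong arguments fails here in the test, not later in production.
    gateway = mock.create_autospec(PaymentGateway, instance=True)
    gateway.charge(100, "USD")  # matches the real signature - fine
    gateway.charge.assert_called_once_with(100, "USD")


def test_with_bare_mock():
    # A plain Mock() accepts any call at all - if charge() gains a new
    # required argument, this test keeps passing while real code breaks.
    gateway = mock.Mock()
    gateway.charge("wrong", "number", "of", "arguments")
```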

Blazing fast CI with pytest-split and GitHub Actions (via) pytest-split is a neat looking variant on the pattern of splitting up a test suite to run different parts of it in parallel on different machines. It involves maintaining a periodically updated JSON file in the repo recording the average runtime of different tests, to enable them to be more fairly divided among test runners. Includes a recipe for running as a matrix in GitHub Actions.

# 22nd February 2021, 7:06 pm / testing, pytest, github-actions

Litestream runs continuously on a test server with generated load and streams backups to S3. It uses physical replication so it'll actually restore the data from S3 periodically and compare the checksum byte-for-byte with the current database.

Ben Johnson

# 11th February 2021, 8:50 pm / testing, litestream, ben-johnson

2020

How to cheat at unit tests with pytest and Black

I’ve been making a lot of progress on Datasette Cloud this week. As an application that provides private hosted Datasette instances (initially targeted at data journalists and newsrooms), the majority of the code I’ve written deals with permissions: allowing people to form teams, invite team members, promote and demote team administrators and suchlike.

[... 933 words]

2019

parameterized. I love the @pytest.mark.parametrize decorator in pytest, which lets you run the same test multiple times against different sets of parameters. The only catch is that the decorator doesn’t work for old-style unittest TestCase tests, which means you can’t easily add it to test suites that were built using the older model. I just found out about parameterized, which works with unittest tests whether or not you are running them using the pytest test runner.
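
Here’s roughly what that looks like with a plain unittest TestCase, a short sketch based on the @parameterized.expand decorator from the library’s README:

```python
import unittest

from parameterized import parameterized


class TestAddition(unittest.TestCase):
    @parameterized.expand([
        (1, 1, 2),
        (2, 3, 5),
        (-1, 1, 0),
    ])
    def test_add(self, a, b, expected):
        # Each tuple becomes its own reported test case, even under the
        # plain unittest runner - no pytest required.
        self.assertEqual(a + b, expected)


if __name__ == "__main__":
    unittest.main()
```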

# 19th February 2019, 9:05 pm / python, testing, pytest

2018

The interesting ideas in Datasette

Datasette (previously) is my open source tool for exploring and publishing structured data. There are a lot of ideas embedded in Datasette. I realized that I haven’t put many of them into writing.

[... 2,857 words]

Datasette unit tests: monkeytype_call_traces (via) Faceted browse against every function call that occurs during the execution of Datasette’s test suite. I used Instagram’s MonkeyType tool to generate this, which can run Python code and generates a SQLite database of all of the traced calls. It’s intended to be used to automatically add mypy annotations to your code, but since it produces a SQLite database as a by-product I’ve started exploring the intermediary format using Datasette. Generating this was as easy as running “monkeytype run `which pytest`” in the Datasette root directory.
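
Since the by-product is an ordinary SQLite file, you can also query it directly. A rough sketch, assuming MonkeyType’s default monkeytype.sqlite3 store and that its monkeytype_call_traces table has module and qualname columns:

```python
import sqlite3

# monkeytype.sqlite3 is MonkeyType's default trace store; the module and
# qualname columns are assumptions about its schema.
conn = sqlite3.connect("monkeytype.sqlite3")
rows = conn.execute(
    """
    select module, qualname, count(*) as calls
    from monkeytype_call_traces
    group by module, qualname
    order by calls desc
    limit 10
    """
)
for module, qualname, calls in rows:
    print(f"{calls:6d}  {module}.{qualname}")
```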

# 2nd August 2018, 9:03 pm / python, sqlite, static-typing, testing, datasette, mypy

Documentation unit tests

Or: Test-driven documentation.

[... 1,521 words]

Hynek Schlawack: Testing & Packaging (via) “How to ensure that your tests run code that you think they are running, and how to measure your coverage over multiple tox runs (in parallel!)”—Hynek makes a convincing argument for putting your packaged Python code in a src/ directory for ease of testing and coverage.

# 22nd May 2018, 10:12 pm / packaging, python, testing, hynek-schlawack

2017

I’ve heard managers and teams mandating 100% code coverage for applications. That’s a really bad idea. The problem is that you get diminishing returns on your tests as the coverage increases much beyond 70% (I made that number up… no science there). Why is that? Well, when you strive for 100% all the time, you find yourself spending time testing things that really don’t need to be tested. Things that really have no logic in them at all (so any bugs could be caught by ESLint and Flow). Maintaining tests like this actually really slows you and your team down.

Kent C. Dodds

# 27th October 2017, 6:20 am / testing

How to set up world-class continuous deployment using free hosted tools

I’m going to describe a way to put together a world-class continuous deployment infrastructure for your side-project without spending any money.

[... 1,294 words]

Cypress (via) Promising looking new open source testing framework for full-blown web integration testing—a modern alternative to Selenium. I spent five minutes playing with the demo and was really impressed by it—especially their “time travel” feature which lets you hover over a passed test and see the state of the browser when each of those assertions was executed.

# 11th October 2017, 4:14 pm / selenium, testing, cypress

2011

One interesting quirk of Pinboard is a complete absence of unit tests. I used to be a die-hard believer in testing, but in Pinboard I tried a different approach, as an experiment. Instead of writing tests I try to be extremely careful in coding, and keep the code size small so I continue to understand it. I've found my defect rate to be pretty comparable to earlier projects that included extensive test suites and fixtures, but I am much more productive on Pinboard.

Maciej Ceglowski

# 11th February 2011, 2:57 am / maciej-ceglowski, pinboard, testing, recovered

2010

Flask 0.1 Released. Armin’s Flask (a Python microframework built around Werkzeug and Jinja2) is looking pretty solid for a two-week-old project—extensive documentation, comprehensive unit test support (and example applications with unit tests) and some very tidy API design.
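
The testing story remains one of Flask’s nicest features; a minimal sketch using the built-in test client (written against modern Flask rather than the 0.1 release):

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "Hello, world"


def test_index():
    # Flask's test client exercises views without starting a real server.
    client = app.test_client()
    response = client.get("/")
    assert response.status_code == 200
    assert b"Hello, world" in response.data
```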

# 16th April 2010, 5:12 pm / armin-ronacher, flask, jinja, microframeworks, python, testing, werkzeug

Unit Testing Achievements. A plugin for Python’s nose test runner that adds achievements—“Night Shift: Make a failing suite pass between 12am and 5am.”

# 28th February 2010, 3:56 pm / nose, nosetest, python, testing

twitter-text-conformance (via) This is a neat idea: Twitter have released open source libraries for parsing standard tweet syntax in Ruby and Java, but they’ve also released a set of YAML unit tests aimed at anyone who wants to implement the same parsing logic in other languages.

# 6th February 2010, 3:39 pm / java, ruby, testing, twitter, yaml, conformance-suites

rlisagor’s freshen. A Python clone of Ruby’s innovative Cucumber testing framework. Tests are defined as a set of plain-text scenarios, which are then executed by being matched against test functions decorated with regular expressions. Has anyone used this or Cucumber? I’m intrigued but unconvinced—are the plain text scenarios really a useful way of defining tests?
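
To make the model concrete, here’s a toy illustration of the regex-matching idea; this is a generic sketch, not freshen’s or Cucumber’s actual API:

```python
import re

# Toy illustration of the Cucumber/freshen model (not their real APIs):
# plain-text steps are matched against regex-decorated step functions.
STEPS = []


def step(pattern):
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register


@step(r"I have (\d+) cucumbers")
def have_cucumbers(state, count):
    state["cucumbers"] = int(count)


@step(r"I eat (\d+) cucumbers")
def eat_cucumbers(state, count):
    state["cucumbers"] -= int(count)


@step(r"I should have (\d+) cucumbers left")
def check_cucumbers(state, count):
    assert state["cucumbers"] == int(count)


def run_scenario(lines):
    state = {}
    for line in lines:
        for pattern, func in STEPS:
            match = pattern.search(line)
            if match:
                func(state, *match.groups())
                break


run_scenario([
    "Given I have 5 cucumbers",
    "When I eat 3 cucumbers",
    "Then I should have 2 cucumbers left",
])
```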

# 5th January 2010, 7:30 pm / bdd, cucumber, freshen, python, ruby, testing

2009

There is no WebKit on Mobile. PPK ran 27 tests against 19 different WebKit-on-mobile implementations and found enormous disparities between the levels of support in currently available mobile phones.

# 7th October 2009, 12:23 pm / mobile, ppk, standards, testing, webkit

shunit2 (via) xUnit style testing for shell scripts.

# 27th September 2009, 7:34 pm / bash, shell, shunit2, testing, unix, xunit

Fabric factory. Promising looking continuous integration server written in Django, which uses Fabric scripts to define actions.

# 21st September 2009, 6:35 pm / continuous-integration, django, fabric, fabricfactory, python, testing

Test-Driven Heresy. Tim Bray advocates TDD for maintenance development, but argues that it may not be as useful during the exploratory, greenfield development phase of a project.

# 24th June 2009, 11:03 am / tdd, testing, tim-bray

Testing Django Views for Concurrency Issues. Neat decorator for executing a Django view under high concurrency in your unit tests, to help spot errors caused by database race conditions in code that should be executed inside a transaction.
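
The underlying trick is simple enough to sketch framework-free. A rough illustration of the idea (not the decorator from the linked post): fire the same callable from many threads at once and re-raise any exception that occurred.

```python
import functools
import threading


def test_concurrently(times):
    """Rough sketch: run the decorated callable in `times` parallel
    threads and re-raise the first exception any of them hit."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            exceptions = []

            def worker():
                try:
                    func(*args, **kwargs)
                except Exception as exc:
                    exceptions.append(exc)

            threads = [threading.Thread(target=worker) for _ in range(times)]
            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()
            if exceptions:
                raise exceptions[0]
        return wrapper
    return decorator


@test_concurrently(15)
def add_to_basket():
    # In a real Django test this body would hit the view, e.g. with
    # self.client.post(...) against a hypothetical endpoint.
    pass


add_to_basket()
```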

# 27th May 2009, 10:01 am / concurrency, django, python, raceconditions, testing, threads

Nose 0.11 released. My favourite Python testing tool just got some really neat new features, including the ability to parallelize tests across multiple processes (hence CPUs) using the multiprocess module, Xunit XML output for integration with continuous integration tools and a --failed switch to re-run only the last batch of failed tests.

# 8th May 2009, 11:24 am / multiprocess, nose, python, testing, xunit

Right now, pypy compiled with JIT can run the whole CPython test suite without crashing, which means we're done with obvious bugs and the only ones waiting for us are really horrible.

Maciej Fijalkowski

# 1st May 2009, 3:04 pm / pypy, jit, python, jpython, bugs, testing