Daniel Perrett
2012-04-24 18:09:19 UTC
Is there any way to 'tag' tests in Perl?
What I mean is that, ideally, if you tagged all your tests according
to the functionality they depend on, you could use the tags to work
out more easily what was going wrong.
# Looks like you failed 21 tests of 75.
# Failures by tag:
# syntax: 0/50
# lwp: 21/25
# http: 21/21
# search: 20/20
# unicode: 5/8
This report is for some hypothetical module which imports some syntax
and allows the user to run some searches over the web via an LWP
object.
Here, we can see at a glance that there is nothing wrong with the
syntactic code, and that the code which requires LWP itself is probably
fine, because the problem only appears when the user tries to make HTTP
requests.
It looks likely that even though all the search tests fail, they are
failing because there is no working connection, as shown by the first
HTTP request. Although five of the unicode tests are failing, three
aren't (the ones that just throw unicode characters at the syntax).
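To make the idea concrete, here is a minimal sketch of what a tagging
helper might look like on top of Test::More. The tagged_ok() wrapper and
the %by_tag bookkeeping are entirely hypothetical (nothing like them
exists today); the point is only that each assertion carries a list of
tags and that something at the end of the run can aggregate failures per
tag.

use strict;
use warnings;
use Test::More;

my %by_tag;    # tag => [ failures, total ]

# Hypothetical wrapper: run an ordinary ok() but record the result
# against every tag the assertion was given.
sub tagged_ok {
    my ($tags, $cond, $name) = @_;
    my $ok = ok($cond, $name);
    for my $tag (@$tags) {
        $by_tag{$tag}[1]++;
        $by_tag{$tag}[0]++ unless $ok;
    }
    return $ok;
}

tagged_ok [qw(syntax)],          1 + 1 == 2,              'stand-in for a syntax test';
tagged_ok [qw(syntax unicode)],  length("\x{263A}") == 1, 'unicode literal parses';
tagged_ok [qw(lwp http search)], 0,                       'pretend HTTP search (fails here)';

# Crude "failures by tag" report, mirroring the output sketched above.
diag('Failures by tag:');
for my $tag (sort keys %by_tag) {
    my ($fail, $total) = map { $_ || 0 } @{ $by_tag{$tag} };
    diag(sprintf '  %s: %d/%d', $tag, $fail, $total);
}

done_testing();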
Assuming there isn't any way to 'tag' tests like this, should there
be? The problems I can see are:
- How to make it easy to do syntactically, avoiding lots of repetition
(see the subtest sketch after this list)
- Whether this requires a complete rewrite of the assumptions of the
core test handling modules
- The effort involved for test authors in tagging tests in this detail
- Some test failures happen because something unexpected went wrong, and
tagging is only useful if we know in advance which categories are useful.
- It could encourage authors to be lazy
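On the repetition point: one existing, if coarse, approximation is to
group related assertions in Test::More subtests and treat the subtest
name as a single tag, so you don't have to label every assertion by
hand. This is only a sketch, and subtest names are one-per-group, not
the many-to-many tags being proposed here.

use strict;
use warnings;
use Test::More;

# Each subtest name acts as one coarse "tag" for everything inside it.
subtest 'syntax' => sub {
    ok(1 + 1 == 2, 'stand-in syntax check');
};

subtest 'unicode' => sub {
    is(length("\x{263A}"), 1, 'smiley is one character');
};

done_testing();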
But the advantages are:
- If there are common problems (the computer can't access the net, unicode
handling is dodgy), this makes them more straightforward to diagnose
than reading through the logs of very long test scripts with lots of
failure diagnostics
- Might be useful for coverage checking (you could write something
asking if you have a test which has a particular combination of tags)
- You can write more clever algorithms to try to pinpoint where the
problem is (e.g. which combinations of tags always fail)
- Makes TDD easier because you can write lots of tests which will fail
(and which you know you can ignore because they depend on code that isn't
written yet), and you can still focus on the features you're writing
right now (see the TODO sketch after this list).
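For that last point, the closest thing that exists today is a TODO block:
the tests still run and report, but expected failures don't break the
suite. It is a per-block category rather than a tag, but it supports the
"write failing tests now, focus later" workflow. My::Module here is a
placeholder package name, not a real module.

use strict;
use warnings;
use Test::More;

# Feature not written yet: mark its tests as TODO so they run (and are
# reported) without failing the suite.
TODO: {
    local $TODO = 'search() is not implemented yet';
    ok(My::Module->can('search'), 'search() will be provided');
}

ok(1, 'tests for code that already exists still run normally');

done_testing();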
(I guess one answer could be 'write them in separate test scripts' but
what I want is tags (many-to-many) rather than categories
(many-to-one), and more files is a bit cumbersome.)
Daniel