One of the last things I did last year on the testing front was to make the tests independent from one another: each test now runs as an exec command, which ensures the tests are properly self-executed (and as such self-contained).
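Just to sketch the idea (this is not the actual ElkArte runner, and the names are made up): running every test through exec() gives each one its own PHP process, so no globals, DB handles or static state can leak from one test to the next.

```php
<?php
// Sketch only -- not the actual ElkArte test runner.
// The point: exec() runs every test file in its own PHP process,
// so nothing can leak from one test to the next.
function run_isolated(array $test_files)
{
	$failures = 0;
	foreach ($test_files as $test_file)
	{
		// Each test is a separate "php file.php" command; its exit
		// code tells us whether it passed.
		exec('php ' . escapeshellarg($test_file), $output, $exit_code);
		if ($exit_code !== 0)
			$failures++;
	}

	return $failures;
}

// With no test files there is nothing to fail.
var_dump(run_isolated(array()));
```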
Another small goodie I added: when the series of tests is completed (successfully or not), the travis-ci script grabs the content of the error log and outputs it to the console.
At the moment an error in the log doesn't fail the build, but from time to time the output is useful, for example:
https://travis-ci.org/elkarte/Elkarte/jobs/17197799#L445
the first error is "normal" (it's generated while testing the registration), but the other two are not: they are two legit bugs I introduced while changing something else. ;D
My plan is to create a test that checks whether the log is empty (a general one to be run at the end of everything). There is a bit to play with though, because the log may contain legit errors. One option is for each test to clean up any "expected" error it may cause: in the current case, for example, part of the tearDown method in TestRegistration could fetch that error and remove it. At that point, whatever is left is clearly a bug and could be reported as a "build breaker".
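To sketch the idea in plain PHP (an array stands in for ElkArte's error log table here, so none of this is actual ElkArte code):

```php
<?php
// Sketch of the idea above, not actual ElkArte code: an array stands
// in for the error log table.

// What a tearDown() could do: remove the errors the test knew it
// would generate, e.g. the one caused while testing registration.
function remove_expected_errors(array $log, array $expected)
{
	return array_values(array_diff($log, $expected));
}

// The general test run at the end of everything: anything still in
// the log is unexpected, hence a bug -- a "build breaker".
function log_is_clean(array $log)
{
	return count($log) === 0;
}

// The registration test leaves one known error behind...
$log = array('registration: expected error');
// ...its tearDown cleans it up, so the final check passes.
$log = remove_expected_errors($log, array('registration: expected error'));
var_dump(log_is_clean($log));
```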
/me wipes sweat from brow that you did not use one of my broken PRs as the example :D
All the Travis work is really very cool; it has saved us from a bunch of errors already!
@Spuds is shy and didn't tell people that he added another toy to the collection: https://scrutinizer-ci.com/
In particular: https://scrutinizer-ci.com/g/elkarte/Elkarte/
What does scrutinizer do?
It checks the code looking for broken things: for example, calls to methods or functions that don't exist, methods or functions that are never used, bad documentation, bad coding style, and many other things.
It runs on each pull request, so before merging there is another test that will help ElkArte be less broken, allowing easier development and a faster release cycle! :D
It's been a fun toy to play with 8) and it's found several legit bugs in the code that we have fixed.
There are lots of mundane documentation and style issues as well, but the real joy is that it does find bugs and unused code that would be difficult to spot otherwise. It does take looking at each area that gets highlighted and determining why it got flagged (some are obvious, some not so much), but in the end it's creating a better product.
One thing I just discovered would greatly benefit from automated testing is the moderation area, mainly because it's an area that doesn't get used that much.
But in order to test even the basics there are several things to do, because there are tons of variants, so the easiest ones are:
- Create a user <= one test: use the internal functions to create one, or mimic the registration process, then query the db directly to check the expected values are there
- Create a topic (with a post obviously) <= second test: again use createPost or mimic the Post page and verify everything is inserted correctly into the database
- Report that post <= and check the report is there
- Open/view the report
- Dismiss it
- Close it
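Put together, the reporting part could look something like this skeleton (all the names here are invented for the sketch; the real test would use ElkArte's own functions and check each step by querying the database directly):

```php
<?php
// Skeleton only: how the steps above could map to one test case.
// The class and method names are invented; the real test would use
// ElkArte's functions (createPost(), etc.) and verify every step
// with direct database queries.
class TestReportedPosts // would extend the project's base test case
{
	public function testReportFlow()
	{
		// 1. create a user (internal function, or mimic registration)
		// 2. create a topic + post (createPost, or mimic the Post page)
		// 3. report the post, then check the report row exists
		// 4. open/view the report
		// 5. dismiss it
		// 6. close it, then check the row is flagged as closed
	}
}
```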
And this is just about reporting; the moderation area also covers unapproved posts (both posts made by members without permission and unapproved posts that need to be inserted and tested for approval as well as deletion), and members, and emails, etc...
Well, this need is now tracked; sooner or later someone will create a test. ;D