All statistics in our paper that are derived from the NCQ dataset (about packages and code snippets) should be reproducible via the tests in tests/info. This lets us easily measure changes in our tool and also improves the reproducibility of our paper.
As we are adding code snippet error reporting to NCQ, I believe it is a good idea to make sure these tests are maintained for v2.0.
Additionally, output from the tests should be logged in an easily understandable format, so that someone only needs to rerun the tests to confirm that the data in our paper is correct. At the moment the output is simply printed to the console, which is not as convenient as, for example, a JSON or CSV file. A rough sketch of what this could look like is below.
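This is only a minimal sketch, not the actual test code: the `reportStats` helper, the `stats` object, its fields, and the output path are illustrative assumptions and do not exist in the current suite.

```js
// Sketch: collect statistics during a test run and also persist them as JSON,
// instead of only printing to the console.
const fs = require("fs");
const path = require("path");

function reportStats(stats, outFile = path.join("tests", "info", "stats.json")) {
  // Keep the existing console output so interactive runs lose nothing.
  console.table(stats);
  // Persist the same numbers in a machine-readable form, so the paper's
  // figures can be checked by re-running the tests and inspecting the file.
  fs.writeFileSync(outFile, JSON.stringify(stats, null, 2));
}

// Example usage with made-up numbers:
reportStats({
  packagesAnalyzed: 1000,
  snippetsExtracted: 5000,
  snippetsWithErrors: 1200,
});
```

A CSV writer would work just as well; the main point is that the numbers end up in a file that can be diffed against the values reported in the paper.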