Overview of this module

Note: This is probably not the best way to get coverage - but I'm leaving this content in case it's useful

Note: Until the next nbdev release you need to use an editable install, as this module uses new functions like split_flags_and_code.

Feel free to use pytest etc., but to follow these examples you'll just need coverage.

This notebook creates testcoverage.py, which is not tied to the decision_tree project (so you can just download testcoverage.py if you like).

Running testcoverage.py will:

  • create a new folder, [lib_path]_test, in your nbdev project
  • delete any existing test scripts from [lib_path]_test
  • write a test script to [lib_path]_test for each notebook in [nbs_path]
    • and a run_all.py to run all test scripts in one go
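
For example, with a lib_path of decision_tree you could check what was generated with a quick Python snippet like this (the folder and file names are assumptions, following the [lib_path]_test and test_[notebook name].py patterns above):

    from pathlib import Path
    # run from the project root: list the test scripts created by testcoverage.py
    test_dir = Path('decision_tree_test')  # [lib_path]_test, assuming lib_path is decision_tree
    for p in sorted(test_dir.glob('*.py')):
        print(p.name)  # e.g. run_all.py, test_00_core.py, ...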

To create a test coverage report:

  • cd to the nbs_path of the project you want to test
  • create the test scripts with python [full path ...]/testcoverage.py
  • coverage run --source=[lib_path] [lib_path]_test/run_all.py
  • coverage report

Creating a test coverage report for fastai2 in my environment looks like:

cd /home/peter/github/pete88b/fastai2/nbs

python /home/peter/github/pete88b/decision_tree/decision_tree/testcoverage.py

coverage run --source=/home/peter/github/pete88b/fastai2/fastai2 /home/peter/github/pete88b/fastai2/fastai2_test/run_all.py

coverage report

Note: this ↑ fails very quickly, as the fastai2 tests use things that are not available in plain Python.

What next?

  • see if running tests in plain Python is useful
    • it might be true that the tests of some/most projects don't need any IPython
  • make artifacts (like images/mnist3.png) available to the test scripts
    • so you don't have to be in the nbs folder to run tests
  • see if we can get coverage when running tests with IPython
  • see if there is a nice way to separate plain Python tests from IPython tests

Details details ...

I chose to "import" the module being tested (rather than write all code cells to the test script) so that:

  • we are testing the library created by nbdev
    • because the things we deliver are .py files, I can't help thinking that these are what we should be testing
  • we could use the test scripts to test a pip installed version of the library
    • i.e. we are testing the result of the full build, package and delivery process
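
To make that concrete, here's a rough sketch of what a generated test_00_core.py could look like - the layout and the commented-out test cell are assumptions, not the real generated output:

    # test_00_core.py - illustrative sketch only
    # import the module that nbdev built from 00_core.ipynb, rather than
    # re-declaring its code, so the tests exercise the delivered .py files
    from decision_tree.core import *  # the real script imports everything dir() reports - see write_imports below

    # the source of the notebook's test cells is then written below unchanged,
    # so plain assert-style nbdev tests run when this script is executed
    # assert my_function(1) == 2  # hypothetical test cell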

write_imports[source]

write_imports(test_file, exports)

Write import statements to the test script for every module that the notebook being converted exports to.

The test script will import everything returned by dir(module), because the test code needs to run as if it were inside the module being tested.
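
The generated statements aren't reproduced here, but the effect of "import everything returned by dir(module)" could be achieved with something like this (an illustrative sketch, not necessarily how the generated script does it):

    import importlib
    # pull every name dir() reports - including _underscore names that a
    # `from module import *` would skip - into the test script's namespace
    _module = importlib.import_module('decision_tree.core')  # hypothetical module name
    for _name in dir(_module):
        globals()[_name] = getattr(_module, _name)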

write_test_cell_callback[source]

write_test_cell_callback(i, cell, export, code)

Return the code to be written to the test script, or None to write nothing for this cell.
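
For illustration, a callback along these lines could drop notebook-only cells and keep everything else; the parameter semantics assumed here (code holding the cell's source as a string) are assumptions rather than documented behaviour:

    # illustrative sketch of a cell callback: skip cells that just call
    # show_doc or notebook2script, write every other cell's code unchanged
    def write_test_cell_callback(i, cell, export, code):
        first_line = code.strip().splitlines()[0] if code.strip() else ''
        if first_line.startswith(('show_doc(', 'notebook2script(')):
            return None  # nothing is written to the test script for this cell
        return code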

write_test_cells[source]

write_test_cells(test_file, nb, exports)

Writes the source of code cells to the test script

notebook2testscript[source]

notebook2testscript()

Convert notebooks to test scripts
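
If you'd rather not shell out to testcoverage.py, you could call this from Python; this assumes the module is importable as decision_tree.testcoverage and that it picks up nbs_path and lib_path from the project's settings.ini:

    # run the notebook-to-test-script conversion from Python
    from decision_tree.testcoverage import notebook2testscript
    notebook2testscript()  # writes test_[notebook name].py files and run_all.py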

PB notes

  • convert all notebooks that don't start with _
    • import default_export of notebook
    • import things exported to other modules
    • handle nbdev test flags - TODO
    • create test_[notebook name].py
    • write code of test cells to test_[notebook name].py
      • exclude show_doc, notebook2script, cmd calls etc. - TODO

Creating a test coverage report for decision_tree itself looks like:

cd /home/peter/github/pete88b/decision_tree

python /home/peter/github/pete88b/decision_tree/decision_tree/testcoverage.py

coverage run --source=/home/peter/github/pete88b/decision_tree/decision_tree /home/peter/github/pete88b/decision_tree/decision_tree_test/run_all.py

coverage report

You can also run coverage for a single test script:

coverage run --source=/home/peter/github/pete88b/decision_tree/decision_tree /home/peter/github/pete88b/decision_tree/decision_tree_test/test_00_core.py