Posts

Showing posts from January, 2022

Interface-based Programming with TLA+ (+Dependency Injection with auto-wiring)

 Implementing the algorithm with TLA+ can output interfaces in any language, together with tests against those interfaces. Implementing these interfaces connects an implementation with the algorithm. The wiring of the interfaces via dependency injection (DI) is generated from the model/algorithm; the database of implementations is provided by the implementer.
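A minimal sketch of the idea, in Python, with all names hypothetical: an interface as a toolchain might emit it from a spec, one implementation supplied by the implementer, and a tiny registry standing in for DI auto-wiring.

```python
import abc

# Hypothetical interface a TLA+ toolchain might emit for a queue spec.
class QueueInterface(abc.ABC):
    @abc.abstractmethod
    def enqueue(self, item): ...

    @abc.abstractmethod
    def dequeue(self): ...

# A concrete implementation registered by the implementer.
class ListQueue(QueueInterface):
    def __init__(self):
        self._items = []

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        return self._items.pop(0)

# Minimal stand-in for DI auto-wiring: a registry mapping
# interfaces to the chosen implementation.
REGISTRY = {QueueInterface: ListQueue}

def wire(interface):
    return REGISTRY[interface]()

q = wire(QueueInterface)
q.enqueue(1)
q.enqueue(2)
assert q.dequeue() == 1  # FIFO behavior, as the spec would demand
```

In a real setup the tests against `QueueInterface` would also be generated from the model, so every registered implementation is checked against the same algorithm.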

Smart-Test

Git Test Selection: Use code coverage to determine which lines of code are executed by which tests. Take the git diff of changed lines, take the set union of the tests per changed line, and run those.

Flaky-Detection and Handling: If a test fails, rerun it n times. If it passes k times, mark it as green but increase its flakiness counter by some sensible amount. Report the statistics.

Fail Fast: Remember which tests failed when we touched a file; this builds up a database. When we run the tests, order them so that the most "dangerous" ones run first. This moves the moment of failure closer to the typing of the offending line.

React quickly: Rerun the tests as soon as the file system changes. If production code changed, compute the line diffs and first execute the tests that cover those lines (as in Git Test Selection). Report that these passed, then run some random n tests in the background. The goal is to finish faster than the eyes can focus on the test-output window. Prepare the n...

docstrings done right

Docs should: describe the algorithm, give usage examples, give instructions on how to use the function. Use cases? Alternatives? Motivation? If you describe as much as possible via formal languages, tools can read the docs too and help you. It also removes ambiguities.

You could use: TLA+ to describe the algorithm; usage examples via doctests in the form of unit tests; instructions on how to use the function via Design by Contract (DbC): pre-conditions, post-conditions, and invariants (these blend into the algorithm description in TLA+). The docstrings are parsed and documentation is generated, using the new power gained by the tools (maybe create a playground, etc.)
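A small sketch of what such a docstring could look like in Python, combining a doctest as the usage example with DbC-style pre- and post-conditions as assertions (the function is just an illustration, not from the post):

```python
def isqrt(n):
    """Integer square root of a non-negative integer.

    Usage example (runnable via `python -m doctest`):
    >>> isqrt(10)
    3

    Pre-condition:  n >= 0
    Post-condition: result**2 <= n < (result + 1)**2
    """
    assert n >= 0, "pre-condition violated: n must be non-negative"
    r = int(n ** 0.5)
    # Correct possible floating-point rounding at the boundary.
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "post-condition violated"
    return r
```

A doc generator can now extract the example and the contracts mechanically, and the doctest doubles as a unit test, so the documentation cannot silently drift from the code.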

Diagram tests: Testing like checking matplotlib

 When plotting a graph with matplotlib, you define the x range as a list of values and the y range as a list of values, and these are then plotted. These two lists are perfect parameterized test cases: after creating a test with many cases, you just have to look at the plot, which can be generated directly.
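The idea in minimal Python form: the same two lists that would feed `plt.plot(xs, ys)` drive a parameterized test of the function under test (a squaring function here, purely as an illustration).

```python
# The x/y lists that would be handed to matplotlib's plot(xs, ys)
# double as parameterized test cases.
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 4, 9, 16]

def f(x):
    # Function under test; its plot should match (xs, ys).
    return x * x

# Each (x, expected) pair is one test case; plotting the same lists
# lets you eyeball all cases at once instead of reading assertions.
for x, expected in zip(xs, ys):
    assert f(x) == expected
```

With pytest this loop would typically become `@pytest.mark.parametrize("x,expected", zip(xs, ys))`, and a failing case shows up both in the test report and as a visible kink in the plot.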

Version everything

 Each dependency depends on a given version of a module. This implies that every module needs a version, which adds the time dimension to modules. This is still very poorly supported (only at a larger scale).

Algorithm: Implementations check each other

 An algorithm can have multiple implementations. Use property-based testing to verify them against each other. Nice for performance tuning, etc. (versions required).
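A sketch of two implementations checking each other, here for sorting. A property-based testing library such as Hypothesis would generate the inputs; this stdlib-only version uses seeded random cases instead.

```python
import random

# Reference implementation: the built-in sort.
def sort_builtin(xs):
    return sorted(xs)

# A second, independent implementation (insertion sort).
def sort_insertion(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Property: on arbitrary inputs, both implementations must agree.
random.seed(0)
for _ in range(100):
    case = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert sort_builtin(case) == sort_insertion(case)
```

This is exactly the pattern that makes performance tuning safe: the fast new implementation is continuously checked against the slow, obviously-correct one.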

TLA+ + Property-Based Testing

The algorithm and the implementation are two different things. Test (model-check) the algorithm, then use this model (the specification) to verify that the implementation really implements the algorithm via property-based testing, which comes up with test cases by itself. Interesting non-algorithm behavior is still tested via unit tests.
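A toy sketch of this split, with the "specification" as a pure state-transition function (as one might extract from a TLA+ model) and the implementation driven through random operation sequences; all names are illustrative, and a real setup would use a property-based testing library to generate the sequences.

```python
import random

# The "specification": a counter modeled as pure state transitions.
def spec_step(state, op):
    if op == "inc":
        return state + 1
    if op == "reset":
        return 0
    return state

# The implementation under test.
class Counter:
    def __init__(self):
        self.value = 0

    def inc(self):
        self.value += 1

    def reset(self):
        self.value = 0

# Property-based check: random operation sequences must keep the
# implementation in lockstep with the model at every step.
random.seed(1)
for _ in range(50):
    model, impl = 0, Counter()
    for _ in range(30):
        op = random.choice(["inc", "reset"])
        model = spec_step(model, op)
        getattr(impl, op)()
        assert impl.value == model
```

The model carries the algorithmic meaning; the implementation only has to agree with it step by step, and everything outside the model (I/O, error messages, ...) stays in ordinary unit tests.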