# Testing

## Testing Strategy
The cilpy library is developed with a strong emphasis on correctness and
reliability, which is maintained through a comprehensive suite of unit tests.
Our testing strategy ensures that every component of the library is
independently verifiable and that new contributions can be integrated with
confidence.
## How Testing Works
Our testing is built on a few core principles and tools:
- Framework: We use the `pytest` framework for writing and running tests.
  `pytest`'s test discovery mechanism automatically finds and runs all tests
  located in files named `test_*.py`.
- Isolation and Mocking: To test a component's logic without interference
  from its dependencies, we make heavy use of mocking via Python's
  `unittest.mock` library. For example, when testing a `Solver`, the
  `Problem` it is trying to solve is replaced with a "mock" object. This
  allows us to control the exact fitness values returned for any given
  solution, enabling us to create predictable scenarios and verify that the
  solver's internal logic (e.g., selection, elitism) behaves as expected.
- Generic Design: A key feature of the test suite is its generic and
  extensible nature. For components like `Problem`s and constraint handlers,
  we use `pytest.mark.parametrize` to define a single set of tests that
  automatically runs against every implementation. This means that when a
  developer adds a new benchmark function, they only need to add it to a
  list in the test file to have it fully validated against the library's
  interface contract.
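The mocking pattern described above can be sketched as follows. This is an illustrative example only: the `tournament_select` helper, the shape of the individuals, and the `evaluate(...).fitness` interface are assumptions for the sketch, not cilpy's actual API.

```python
from unittest.mock import MagicMock

def tournament_select(population, problem):
    """Toy selection operator: return the individual with the best
    (lowest) fitness, as reported by problem.evaluate()."""
    return min(population, key=lambda ind: problem.evaluate(ind).fitness)

# Build a mock Problem whose evaluate() returns controlled fitness values,
# so the selection logic can be verified deterministically.
problem = MagicMock()
fitness_by_solution = {(0.0,): 3.0, (1.0,): 1.0, (2.0,): 2.0}
problem.evaluate.side_effect = lambda ind: MagicMock(
    fitness=fitness_by_solution[tuple(ind)]
)

population = [[0.0], [1.0], [2.0]]
best = tournament_select(population, problem)

assert best == [1.0]                      # the individual with fitness 1.0 wins
assert problem.evaluate.call_count == 3   # every individual was evaluated once
```

Because the mock reports fixed fitness values, the test isolates the selection logic from any real objective function.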
## Test Coverage
Our unit tests are organized to mirror the library's structure, providing coverage for each of its core components:
- Problems (`cilpy.problem`)
  - Initialization: Verifies that all problems are instantiated with the
    correct `dimension`, `bounds`, and `name`.
  - Evaluation: Confirms that the `evaluate` method returns an `Evaluation`
    object with the correct structure and data types for fitness and
    constraints.
  - Dynamic Behavior: For dynamic problems like the Moving Peaks Benchmark
    (MPB and CMPB), tests confirm that landscape changes are triggered
    correctly based on the evaluation count.
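The parametrized, generic style of these problem tests might look like the sketch below. The stand-in classes and the exact attribute checks are hypothetical; real tests would import the implementations from `cilpy.problem`.

```python
import pytest

# Stand-in problem implementations for illustration; attribute names
# (dimension, bounds, name) follow the checks described above.
class Sphere:
    def __init__(self):
        self.dimension, self.bounds, self.name = 2, (-5.0, 5.0), "Sphere"

class Rastrigin:
    def __init__(self):
        self.dimension, self.bounds, self.name = 2, (-5.12, 5.12), "Rastrigin"

# Adding a new benchmark to this list is enough to run it through every
# parametrized test below.
ALL_PROBLEMS = [Sphere, Rastrigin]

@pytest.mark.parametrize("problem_cls", ALL_PROBLEMS)
def test_initialization(problem_cls):
    problem = problem_cls()
    assert problem.dimension > 0
    assert problem.bounds[0] < problem.bounds[1]
    assert isinstance(problem.name, str)
```

Each entry in `ALL_PROBLEMS` generates its own test case, so every implementation is validated against the same interface contract.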
- Constraint Handling Mechanisms (`cilpy.solver.chm`)
  - Initialization: Ensures that handlers are created with valid parameters
    (e.g., `alpha` in `AlphaConstraintHandler` must lie within its valid
    range).
  - Comparison Logic: Rigorously tests the `is_better` method for all
    possible comparison scenarios. For example, the `AlphaConstraintHandler`
    tests cover cases where both solutions are feasible, only one is
    feasible, or their satisfaction levels are equal.
  - Internal Calculations: Validates helper functions, such as the
    `_calculate_satisfaction` method in the `AlphaConstraintHandler`,
    against their formal definitions.
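To illustrate how the comparison scenarios are covered, here is a sketch built around a toy feasibility-first comparator. The rule shown here is a generic stand-in, not `AlphaConstraintHandler`'s actual alpha-based logic; solutions are modeled as `(fitness, satisfaction)` pairs with minimization assumed.

```python
def is_better(a, b):
    """Toy comparator: return True if solution a beats solution b.
    Each solution is a (fitness, satisfaction) pair, where a
    satisfaction of 1.0 marks a fully feasible solution."""
    a_feasible, b_feasible = a[1] >= 1.0, b[1] >= 1.0
    if a_feasible and b_feasible:
        return a[0] < b[0]      # both feasible: lower fitness wins
    if a_feasible != b_feasible:
        return a_feasible       # only one feasible: it wins outright
    if a[1] == b[1]:
        return a[0] < b[0]      # equal satisfaction: fall back to fitness
    return a[1] > b[1]          # otherwise: higher satisfaction wins

# One assertion per comparison scenario, mirroring the coverage above.
assert is_better((1.0, 1.0), (2.0, 1.0))   # both feasible
assert is_better((9.0, 1.0), (1.0, 0.2))   # feasible beats infeasible
assert is_better((1.0, 0.5), (2.0, 0.5))   # equal satisfaction -> fitness
assert is_better((5.0, 0.8), (1.0, 0.3))   # higher satisfaction wins
```

Enumerating every branch of the comparison rule in this way is what "all possible comparison scenarios" means in practice.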
- Solvers (`cilpy.solver`)
  - Initialization: Checks that solvers correctly set up their initial state
    and generate a valid initial population.
  - Algorithmic Operators: Each core mechanism of a solver is tested in
    isolation using mocked problems. This includes:
    - GA: Tournament selection, single-point crossover, Gaussian mutation,
      and elitism.
    - RIGA: The replacement of the worst individuals with random immigrants.
    - HyperMGA: The state-switching logic that toggles between normal and
      hyper-mutation modes based on environmental changes.
  - Result Retrieval: Verifies that the `get_result` method correctly
    identifies and returns the best solution from the current population
    based on the active `comparator`.
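A result-retrieval test of this kind can be sketched with a mocked comparator. The `get_result` helper below is an illustrative stand-in for the retrieval logic, not cilpy's implementation; the integer population is a deliberately simple substitute for real solutions.

```python
from unittest.mock import MagicMock

def get_result(population, comparator):
    """Return the best individual according to comparator.is_better
    (illustrative stand-in for the retrieval logic described above)."""
    best = population[0]
    for candidate in population[1:]:
        if comparator.is_better(candidate, best):
            best = candidate
    return best

# Mock comparator that simply prefers lower values, so the expected
# winner is known in advance.
comparator = MagicMock()
comparator.is_better.side_effect = lambda a, b: a < b

population = [7, 3, 9, 1, 5]
assert get_result(population, comparator) == 1
# is_better was consulted once per remaining candidate.
assert comparator.is_better.call_count == len(population) - 1
```

Injecting the comparator as a mock verifies both the winner and that the comparison logic was actually exercised for every candidate.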
To run the entire test suite, simply execute the following command from the root directory of the project:

```shell
pytest
```