Extending The Suite
Most new coverage in mtv-api-tests follows the same recipe:
- Add a scenario to `tests/tests_config/config.py`.
- Point a class at that scenario with `class_plan_config`.
- Keep the standard five-step migration flow.
- Reuse the shared fixtures for setup, cleanup, and validation.
- Extend provider or validation helpers only when the existing abstractions stop being enough.
Note: This suite is intentionally class-based. In most cases, adding a new test means adding one config entry and one new class, not building a brand-new setup stack.
Add A Test Config
`pytest.ini` wires `pytest-testconfig` to `tests/tests_config/config.py`, so new scenarios start there.
A minimal cold-migration entry can be very small:
"test_sanity_cold_mtv_migration": {
"virtual_machines": [
{"name": "mtv-tests-rhel8", "guest_agent": True},
],
"warm_migration": False,
},
When you need more coverage, keep the same shape and add the keys the suite already understands:
"test_cold_migration_comprehensive": {
"virtual_machines": [
{
"name": "mtv-win2019-3disks",
"source_vm_power": "off",
"guest_agent": True,
},
],
"warm_migration": False,
"target_power_state": "on",
"preserve_static_ips": True,
"pvc_name_template": "{{.VmName}}-disk-{{.DiskIndex}}",
"pvc_name_template_use_generate_name": False,
"target_node_selector": {
"mtv-comprehensive-node": None,
},
"target_labels": {
"mtv-comprehensive-label": None,
"test-type": "comprehensive",
},
"target_affinity": {
"podAffinity": {
"preferredDuringSchedulingIgnoredDuringExecution": [
{
"podAffinityTerm": {
"labelSelector": {"matchLabels": {"app": "test"}},
"topologyKey": "kubernetes.io/hostname",
},
"weight": 50,
}
]
}
},
"vm_target_namespace": "mtv-comprehensive-vms",
"multus_namespace": "default", # Cross-namespace NAD access
},
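MTV expands the Go-style `pvc_name_template` server-side; as a quick illustration of the names that result from the entry above, here is a simplified stand-in for that substitution (not the real renderer, and it only handles the two fields shown):

```python
def render_pvc_name(template: str, vm_name: str, disk_index: int) -> str:
    """Simplified stand-in for MTV's Go-template expansion of pvc_name_template.

    Only handles the two fields used in the config entry above.
    """
    return (
        template.replace("{{.VmName}}", vm_name)
        .replace("{{.DiskIndex}}", str(disk_index))
    )

# A VM with three disks gets one PVC per disk index.
names = [
    render_pvc_name("{{.VmName}}-disk-{{.DiskIndex}}", "mtv-win2019-3disks", i)
    for i in range(3)
]
print(names)  # ['mtv-win2019-3disks-disk-0', 'mtv-win2019-3disks-disk-1', 'mtv-win2019-3disks-disk-2']
```

With `pvc_name_template_use_generate_name` set to `False`, the rendered name is used as-is rather than as a `generateName` prefix.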
A few patterns are worth knowing up front:
- `virtual_machines` is always the center of the scenario.
- `warm_migration` controls whether the flow is warm or cold.
- VM-level keys such as `source_vm_power`, `guest_agent`, `clone`, `disk_type`, `add_disks`, `snapshots`, and `clone_name` are already used by existing tests.
- Plan-level keys such as `target_power_state`, `preserve_static_ips`, `pvc_name_template`, `vm_target_namespace`, `target_node_selector`, `target_labels`, `target_affinity`, `pre_hook`, `post_hook`, and `copyoffload` are already supported by the shared helpers.
Tip: In `target_node_selector` and `target_labels`, a value of `None` does not mean "missing". The fixtures replace it with the current `session_uuid`, which makes it easy to create unique labels safely.

Note: The runtime plan is not the raw config entry. `prepared_plan` deep-copies the config, clones VMs when needed, updates VM names, creates hooks, and stores extra source VM metadata. In test methods, always work from `prepared_plan`, not the literal values from `config.py`.
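To see why that matters, here is a minimal sketch of the deep-copy-and-rename behavior. This is a simplification with an invented suffix scheme, not the real fixture:

```python
import copy
import uuid


def prepare_plan_sketch(raw_config: dict) -> dict:
    """Simplified sketch of a prepared_plan-style transformation:
    deep-copy first so the shared entry in config.py is never mutated,
    then make VM names unique per session (suffix scheme invented here).
    """
    plan = copy.deepcopy(raw_config)
    session_uuid = uuid.uuid4().hex[:8]
    for vm in plan["virtual_machines"]:
        vm["name"] = f"{vm['name']}-{session_uuid}"
    return plan


raw = {
    "virtual_machines": [{"name": "mtv-tests-rhel8", "guest_agent": True}],
    "warm_migration": False,
}
plan = prepare_plan_sketch(raw)

# The raw config is untouched; only the runtime plan carries the real VM name.
assert raw["virtual_machines"][0]["name"] == "mtv-tests-rhel8"
assert plan["virtual_machines"][0]["name"].startswith("mtv-tests-rhel8-")
```

A test method that reads names from `config.py` directly would miss the session suffix and look up VMs that do not exist.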
Follow The Five-Step Class Pattern
The standard migration classes all use the same shape. The cold sanity test is the simplest example:
```python
@pytest.mark.tier0
@pytest.mark.incremental
@pytest.mark.parametrize(
    "class_plan_config",
    [
        pytest.param(
            py_config["tests_params"]["test_sanity_cold_mtv_migration"],
        )
    ],
    indirect=True,
    ids=["rhel8"],
)
@pytest.mark.usefixtures("cleanup_migrated_vms")
class TestSanityColdMtvMigration:
    """Cold migration test - sanity check."""

    storage_map: StorageMap
    network_map: NetworkMap
    plan_resource: Plan
```
From there, the class follows the same five steps every time:
1. `test_create_storagemap()` builds the `StorageMap` with `get_storage_migration_map()`.
2. `test_create_networkmap()` builds the `NetworkMap` with `get_network_migration_map()`.
3. `test_create_plan()` populates VM IDs and creates the MTV `Plan` with `create_plan_resource()`.
4. `test_migrate_vms()` starts the migration with `execute_migration()`.
5. `test_check_vms()` validates the result with `check_vms()`.
That pattern is consistent across:
- `tests/test_mtv_cold_migration.py`
- `tests/test_mtv_warm_migration.py`
- `tests/test_cold_migration_comprehensive.py`
- `tests/test_warm_migration_comprehensive.py`
- `tests/test_copyoffload_migration.py`
- `tests/test_post_hook_retain_failed_vm.py`
The shared state also stays consistent: classes store `storage_map`, `network_map`, and `plan_resource` on the class itself so later steps can reuse them.
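That sharing pattern can be shown in isolation. Plain dicts stand in for the real `StorageMap` and `Plan` objects, and the method bodies are placeholders for the real helpers:

```python
class MigrationFlowSketch:
    """Sketch of the incremental class shape: each step writes its result
    to the class so later steps can read it (stand-ins for real resources)."""

    storage_map = None
    plan_resource = None

    def test_create_storagemap(self):
        # In the real suite this calls get_storage_migration_map(...).
        self.__class__.storage_map = {"kind": "StorageMap", "name": "sm-1"}

    def test_create_plan(self):
        # A later step relies on state stored by an earlier one; this is
        # why @pytest.mark.incremental matters when a step fails.
        assert self.storage_map is not None
        self.__class__.plan_resource = {
            "kind": "Plan",
            "storage_map": self.storage_map["name"],
        }


flow = MigrationFlowSketch()
flow.test_create_storagemap()
flow.test_create_plan()
print(MigrationFlowSketch.plan_resource)
```

Writing to `self.__class__` rather than `self` is what makes the state visible to every subsequent test method in the class.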
Warning: Keep `@pytest.mark.incremental` on these classes. The steps depend on each other, and the suite is written to stop later steps cleanly when an earlier one fails.
When choosing markers, reuse the ones already declared in `pytest.ini`:
- `tier0` for core migration coverage
- `warm` for warm migration coverage
- `remote` for remote-cluster destination coverage
- `copyoffload` for XCOPY/copy-offload coverage
- `incremental` for dependent class flows
Warm classes also use the `precopy_interval_forkliftcontroller` fixture, and remote-destination classes switch from `destination_provider` to `destination_ocp_provider`.
Reuse Fixtures Instead Of Rebuilding Setup
Most of the hard work is already in `conftest.py` and the utility modules. Reuse that layer first.
- `prepared_plan` is the main runtime plan fixture. It deep-copies the class config, prepares cloned VMs, tracks source VM metadata in `source_vms_data`, creates hooks when configured, and sets `_vm_target_namespace`.
- `target_namespace` creates a unique namespace for migration resources and stores it for cleanup.
- `source_provider` and `destination_provider` give you provider objects instead of raw credentials.
- `source_provider_inventory` gives you the Forklift inventory view that the mapping helpers use.
- `multus_network_name` automatically creates as many NetworkAttachmentDefinitions as the source VMs need and returns the base name and namespace that `get_network_migration_map()` expects.
- `cleanup_migrated_vms` deletes migrated VMs after the class finishes and automatically uses the custom VM namespace if your plan sets `vm_target_namespace`.
- `precopy_interval_forkliftcontroller` patches the `ForkliftController` for warm-migration snapshot timing, so warm tests should keep using it rather than patching the controller themselves.
- `labeled_worker_node` and `target_vm_labels` are the fixtures to use when your config includes `target_node_selector` or `target_labels`.
- `vm_ssh_connections` gives post-migration validation a reusable SSH connection manager.
- `copyoffload_config`, `copyoffload_storage_secret`, `setup_copyoffload_ssh`, and `mixed_datastore_config` are the copy-offload-specific fixtures already used by the XCOPY tests.
- `prepared_plan_1` and `prepared_plan_2` split a multi-VM plan into two independent plans for simultaneous migration coverage.
If you need to create an extra OpenShift resource for a new scenario, use `create_and_store_resource()` instead of deploying it directly. That helper generates a safe name when needed, deploys the resource, and registers it in the fixture store for teardown.
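The register-for-teardown idea behind that helper looks roughly like this. All names here are invented for illustration, and the real helper also generates safe resource names:

```python
class FakeResource:
    """Minimal stand-in for an OpenShift resource wrapper."""

    def __init__(self, name: str):
        self.name = name
        self.deployed = False

    def deploy(self):
        self.deployed = True

    def clean_up(self):
        self.deployed = False


def create_and_store_sketch(fixture_store: dict, resource: FakeResource) -> FakeResource:
    # Deploy, then register so teardown can find and delete the resource later.
    resource.deploy()
    fixture_store.setdefault("created_resources", []).append(resource)
    return resource


store: dict = {}
res = create_and_store_sketch(store, FakeResource("extra-configmap"))
assert res.deployed
assert store["created_resources"] == [res]

# Teardown walks the store in reverse creation order.
for resource in reversed(store["created_resources"]):
    resource.clean_up()
assert not res.deployed
```

Deploying a resource without registering it is how leaked namespaces and maps happen; the store is what ties ad-hoc resources into the shared cleanup path.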
Tip: `target_namespace` and `vm_target_namespace` are different things. `target_namespace` is where the migration resources live. `vm_target_namespace` is an optional plan setting that tells MTV to place the migrated VMs in a different namespace.
Extend Provider Coverage
Most test classes are already provider-neutral because they work through `source_provider`, `destination_provider`, and `source_provider_inventory`. In practice, extending provider coverage usually means keeping the same five-step class and passing a few extra provider-specific arguments.
The copy-offload tests are a good example. They still use get_storage_migration_map(), but add provider-specific storage plugin data instead of rewriting the whole flow:
```python
offload_plugin_config = {
    "vsphereXcopyConfig": {
        "secretRef": copyoffload_storage_secret.name,
        "storageVendorProduct": storage_vendor_product,
    }
}
self.__class__.storage_map = get_storage_migration_map(
    fixture_store=fixture_store,
    target_namespace=target_namespace,
    source_provider=source_provider,
    destination_provider=destination_provider,
    ocp_admin_client=ocp_admin_client,
    source_provider_inventory=source_provider_inventory,
    vms=vms_names,
    storage_class=storage_class,
    datastore_id=datastore_id,
    offload_plugin_config=offload_plugin_config,
    access_mode="ReadWriteOnce",
    volume_mode="Block",
)
```
That is the pattern to follow when you want to add provider-specific behavior:
- Keep the class structure the same.
- Keep using the shared map and plan helpers.
- Add only the extra provider inputs the helper already supports.
A few existing provider-specific patterns are already in the suite:
- Warm migration tests gate unsupported source providers at module level with `pytest.mark.skipif(...)`.
- Remote destination tests use `destination_ocp_provider` and skip when `remote_ocp_cluster` is not configured.
- Copy-offload tests layer extra fixtures on top of the standard class flow rather than creating a separate framework.
Adding A New Provider Backend
If you need a brand-new provider type, there are two places where the provider/inventory pairing is wired together. One of them is `source_provider_inventory` in `conftest.py`:
```python
providers = {
    Provider.ProviderType.OVA: OvaForkliftInventory,
    Provider.ProviderType.RHV: OvirtForkliftInventory,
    Provider.ProviderType.VSPHERE: VsphereForkliftInventory,
    Provider.ProviderType.OPENSHIFT: OpenshiftForkliftInventory,
    Provider.ProviderType.OPENSTACK: OpenstackForliftinventory,
}
```
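The registry is a plain type-to-class map, so dispatch is a dictionary lookup. A self-contained sketch of the idea, with stand-in classes and string keys instead of `Provider.ProviderType` values:

```python
class VsphereInventorySketch:
    """Stand-in for a real ForkliftInventory implementation."""


class OvaInventorySketch:
    """Stand-in for a real ForkliftInventory implementation."""


# Stand-in for the real mapping keyed by Provider.ProviderType values.
inventory_registry = {
    "vsphere": VsphereInventorySketch,
    "ova": OvaInventorySketch,
}


def inventory_for(provider_type: str):
    """Construct the inventory class registered for a provider type."""
    try:
        return inventory_registry[provider_type]()
    except KeyError:
        # An unregistered backend fails loudly, which is why a new provider
        # must be added to the registry, not just to .providers.json.
        raise ValueError(f"no inventory registered for provider type: {provider_type}")


assert isinstance(inventory_for("vsphere"), VsphereInventorySketch)
```

This is why adding a backend means touching the registry itself, not only the provider class: an entry missing from the map makes every mapping helper fail before any migration starts.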
A new provider type needs all of the following:
- A concrete `BaseProvider` implementation under `libs/providers/`.
- A matching `ForkliftInventory` implementation in `libs/forklift_inventory.py`.
- Registration in `utilities/utils.py:create_source_provider()` so the fixture layer can construct the provider from `.providers.json`.
- Registration in `conftest.py:source_provider_inventory()` so the mapping helpers know how to query storage and network data.
- A `vm_dict()` implementation that fills the fields the validators already expect, including CPU, memory, NICs, disks, power state, and any provider-specific metadata your checks need.
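As a loose illustration of that last requirement, a `vm_dict()` return value needs the fields the validators read. The key names below are assumptions made for this sketch, not the suite's real schema; match whatever `check_vms()` and the other validators actually consume:

```python
def vm_dict_sketch(vm: dict) -> dict:
    """Hypothetical vm_dict() shape. Field names here are illustrative;
    a real implementation must mirror what the validators read."""
    return {
        "name": vm["name"],
        "power_state": vm["power_state"],  # compared against target_power_state
        "cpu_count": vm["cpu_count"],      # CPU validation
        "memory_mb": vm["memory_mb"],      # memory validation
        "nics": vm["nics"],                # network mapping validation
        "disks": vm["disks"],              # storage mapping / PVC validation
    }


source = {
    "name": "demo-vm",
    "power_state": "on",
    "cpu_count": 2,
    "memory_mb": 4096,
    "nics": [{"mac": "00:11:22:33:44:55"}],
    "disks": [{"size_gb": 40}],
}
print(vm_dict_sketch(source)["cpu_count"])  # 2
```

A backend whose `vm_dict()` omits a field does not fail at setup time; it fails later, inside post-migration validation, which is much harder to debug.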
The active source provider is selected from `.providers.json` through `load_source_providers()`, so provider coverage should usually be added by configuration first. Only add a new provider implementation when the suite genuinely needs a new backend, not just a new scenario.
Extend Validation Coverage
For most new test scenarios, the best place to add coverage is `utilities/post_migration.py`, not the `test_check_vms()` method itself.

`check_vms()` is the central post-migration validator. It already covers:
- power state
- CPU and memory
- network mapping
- storage mapping
- PVC naming templates
- snapshots
- serial preservation
- guest agent state
- SSH connectivity
- static IP preservation
- node placement
- VM labels
- VM affinity
- RHV-specific power-off behavior
The existing label, node-placement, and affinity checks show the pattern clearly:
if plan.get("target_node_selector") and labeled_worker_node:
try:
check_vm_node_placement(
destination_vm=destination_vm,
expected_node=labeled_worker_node["node_name"],
)
except Exception as exp:
res[vm_name].append(f"check_vm_node_placement - {str(exp)}")
if plan.get("target_labels") and target_vm_labels:
try:
check_vm_labels(
destination_vm=destination_vm,
expected_labels=target_vm_labels["vm_labels"],
)
except Exception as exp:
res[vm_name].append(f"check_vm_labels - {str(exp)}")
if plan.get("target_affinity"):
try:
check_vm_affinity(
destination_vm=destination_vm,
expected_affinity=plan["target_affinity"],
)
except Exception as exp:
res[vm_name].append(f"check_vm_affinity - {str(exp)}")
When you want to add a new validation, the usual path is:
- Add a plan key to `tests/tests_config/config.py` if the validation is scenario-driven.
- Collect any setup-time data in `prepared_plan` or a dedicated fixture.
- Pass any plan-level MTV fields through `create_plan_resource()` if the validation depends on plan configuration.
- Add a focused helper such as `check_vm_labels()` or `check_pvc_names()` to `utilities/post_migration.py`.
- Call that helper from `check_vms()` behind an `if plan.get("your_key"):` guard.
This keeps the test classes simple. The class still ends with check_vms(), and the validation logic stays in one place.
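The helper-plus-guard shape can be sketched end to end. Both the plan key `serial_console_enabled` and the helper name below are invented for illustration:

```python
def check_vm_serial_console(destination_vm: dict, expected: bool) -> None:
    """Hypothetical focused helper: raise on mismatch, like the real check_* helpers."""
    actual = destination_vm.get("serial_console", False)
    if actual != expected:
        raise AssertionError(f"serial_console is {actual}, expected {expected}")


# Inside check_vms(): failures are recorded per VM, not raised immediately,
# so one failed check does not hide the others.
res = {"demo-vm": []}
plan = {"serial_console_enabled": True}
destination_vm = {"serial_console": False}

if plan.get("serial_console_enabled"):
    try:
        check_vm_serial_console(destination_vm, expected=True)
    except Exception as exp:
        res["demo-vm"].append(f"check_vm_serial_console - {str(exp)}")

print(res)  # {'demo-vm': ['check_vm_serial_console - serial_console is False, expected True']}
```

The accumulate-then-report style is the important part: `check_vms()` collects every mismatch per VM and fails once at the end with the full list.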
Tip: Negative-path tests should still keep the five-step flow. `tests/test_post_hook_retain_failed_vm.py` shows the pattern: wrap `execute_migration()` in `pytest.raises(MigrationPlanExecError)` when failure is expected, then decide whether `check_vms()` should still run based on where the failure happened.
Validate And Collect Your New Tests
The repository does not include a checked-in GitHub Actions or GitLab pipeline file. The validation path that is checked into the repo is visible in `pytest.ini`, `tox.toml`, `Dockerfile`, and `.pre-commit-config.yaml`.

`tox.toml` already defines the first validation pass for new tests:
```toml
[env.pytest-check]
commands = [
    ["uv", "run", "pytest", "--setup-plan"],
    ["uv", "run", "pytest", "--collect-only"],
]
```
That leads to a practical workflow for new suite extensions:
- Run `uv run pytest --collect-only` first. It is also the default `CMD` in the `Dockerfile`, which makes test discovery a first-class check in this repo.
- Run `uv run pytest --setup-plan` or `tox -e pytest-check` to catch setup and collection issues before trying a full migration run.
- Run `pre-commit run --all-files` before you send changes out. The repo's hooks include `flake8`, `ruff`, `ruff-format`, `mypy`, `detect-secrets`, `gitleaks`, and `markdownlint-cli2`.
- Keep using the existing markers unless you truly need a new one.
Warning: `pytest.ini` enables `--strict-markers`. If you introduce a new marker and do not add it to `pytest.ini`, collection will fail.

Tip: Start with collection and setup validation before a live run. This suite depends on real clusters, real providers, and real credentials, so the fastest feedback loop is usually `--collect-only`, `--setup-plan`, and pre-commit.