Guide 11. No snapshots

What you're going to learn

In this guide you're going to learn how to create tests without hypervisor snapshots in the Testo Framework. This kind of test can save you a lot of disk space.

Preconditions

  1. Testo Framework is installed.
  2. Hyper-V is installed.
  3. An Ubuntu Server 16.04 ISO image is downloaded and located at C:\iso\ubuntu_server.iso. The location may be different, but in that case the ISO_DIR command-line parameter has to be adjusted accordingly.
  4. The Testo guest additions ISO image is downloaded and located in the same folder as the Ubuntu Server 16.04 ISO image.
  5. The host has Internet access.
  6. (Recommended) Testo-lang syntax highlight for Sublime Text 3 is set up.
  7. (Recommended) Guide 10 is complete.

Introduction

As you may have noticed, test caching plays a huge role in the Testo Framework. It saves you a lot of time by reusing the results of previously successful test runs (if their cache is valid, of course), thus avoiding unnecessary re-runs. This feature is possible thanks to the hypervisor's ability to take and restore snapshots of virtual machines and flash drives.

But this approach has a downside as well: every snapshot takes up a lot of disk space, and you can run out of space pretty fast. The situation gets worse when you consider that at the end of a test every virtual entity gets its own snapshot. For example, if a test involves 5 virtual machines and 2 flash drives, you'll end up with 5 virtual machine snapshots and 2 flash drive snapshots.

And so, to save you disk space, Testo-lang has a feature that lets you create tests without hypervisor snapshots, keeping only lightweight metadata files. Used properly, this feature saves a ton of disk space without any significant damage to your test runs, and that is the topic of today's guide.

What to begin with?

To take a good look at the no-snapshots feature, we first need to make some preparations. Namely, we're going to split the test_ping test from the previous guide in two: test_ping_1 and test_ping_2.

test test_ping_1: client_setup_nic, server_setup_nic {
    client exec bash "ping 192.168.1.2 -c5"
}

test test_ping_2: client_setup_nic, server_setup_nic {
    server exec bash "ping 192.168.1.1 -c5"
}

Run all the tests again and make sure they are cached:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes
UP-TO-DATE TESTS:
server_install_ubuntu
server_prepare
server_install_guest_additions
server_setup_nic
client_install_ubuntu
client_prepare
client_install_guest_additions
client_setup_nic
TESTS TO RUN:
test_ping_1
test_ping_2
[ 80%] Preparing the environment for test test_ping_1
[ 80%] Restoring snapshot server_setup_nic for virtual machine server
[ 80%] Restoring snapshot client_setup_nic for virtual machine client
[ 80%] Running test test_ping_1
[ 80%] Executing bash command in virtual machine client with timeout 10m
+ ping 192.168.1.2 -c5
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.030 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.033 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=0.087 ms

--- 192.168.1.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4075ms
rtt min/avg/max/mdev = 0.016/0.040/0.087/0.024 ms
[ 80%] Taking snapshot test_ping_1 for virtual machine client
[ 80%] Taking snapshot test_ping_1 for virtual machine server
[ 90%] Test test_ping_1 PASSED in 0h:0m:14s
[ 90%] Preparing the environment for test test_ping_2
[ 90%] Restoring snapshot server_setup_nic for virtual machine server
[ 90%] Restoring snapshot client_setup_nic for virtual machine client
[ 90%] Running test test_ping_2
[ 90%] Executing bash command in virtual machine server with timeout 10m
+ ping 192.168.1.1 -c5
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=0.035 ms

--- 192.168.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4095ms
rtt min/avg/max/mdev = 0.017/0.030/0.035/0.007 ms
[ 90%] Taking snapshot test_ping_2 for virtual machine client
[ 90%] Taking snapshot test_ping_2 for virtual machine server
[100%] Test test_ping_2 PASSED in 0h:0m:14s
PROCESSED TOTAL 10 TESTS IN 0h:0m:28s
UP-TO-DATE: 8
RUN SUCCESSFULLY: 2
FAILED: 0
C:\Users\Testo>

And now let's take stock of the tests hierarchy we've got to this point: we have 10 tests in total, and at the end of each test snapshots are created. We already consume a huge amount of disk space as it is. Of course, we want to fix this.

Let's figure out why we even need snapshots at the end of each successful test. Mostly so that Testo can restore those snapshots of virtual machines and flash drives when it needs to run the children tests. For example, if the test_ping_1 test lost its cache, the Testo Framework would need the snapshots from the server_setup_nic and client_setup_nic tests just to run it.

But come to think of it, why do we need snapshots at the end of the test_ping_1 and test_ping_2 tests at all? These tests are the leaves of our tests tree, and there's simply no need to restore the test bench to the state it had at the end of them. Therefore, we may just tell Testo not to create hypervisor snapshots at the end of these tests (please make sure that all your tests pass and are cached before making the changes):

[no_snapshots: true]
test test_ping_1: client_setup_nic, server_setup_nic {
    client exec bash "ping 192.168.1.2 -c5"
}

[no_snapshots: true]
test test_ping_2: client_setup_nic, server_setup_nic {
    server exec bash "ping 192.168.1.1 -c5"
}

We've just used a new Testo-lang feature: test attributes. At the moment there are only two available test attributes: no_snapshots and description. The description attribute is not that interesting: it lets you attach a human-readable description to a test, which may be stored in the test report (if you tell Testo to create such a report with the --report_folder command-line argument). The no_snapshots attribute is more meaningful, and we're going to set its value to true.
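For illustration only (don't add this to your script), here is how the two attributes might be combined. The description text is made up, and we're assuming both attributes can be listed in a single bracket pair:

[no_snapshots: true, description: "Client pings the server"]
test test_ping_1: client_setup_nic, server_setup_nic {
    client exec bash "ping 192.168.1.2 -c5"
}

To actually get a report with that description, you would add the --report_folder argument to the run, for example: testo run tests.testo --param ISO_DIR C:\iso --report_folder C:\testo_report (the folder path here is just an example).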

Let's run the script:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes
UP-TO-DATE TESTS:
server_install_ubuntu
server_prepare
server_install_guest_additions
server_setup_nic
client_install_ubuntu
client_prepare
client_install_guest_additions
client_setup_nic
TESTS TO RUN:
test_ping_1
test_ping_2
[ 80%] Preparing the environment for test test_ping_1
[ 80%] Restoring snapshot server_setup_nic for virtual machine server
[ 80%] Restoring snapshot client_setup_nic for virtual machine client
[ 80%] Running test test_ping_1
[ 80%] Executing bash command in virtual machine client with timeout 10m
+ ping 192.168.1.2 -c5
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.030 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.033 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=0.087 ms

--- 192.168.1.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4075ms
rtt min/avg/max/mdev = 0.016/0.040/0.087/0.024 ms
[ 80%] Taking snapshot test_ping_1 for virtual machine client
[ 80%] Taking snapshot test_ping_1 for virtual machine server
[ 90%] Test test_ping_1 PASSED in 0h:0m:14s
[ 90%] Preparing the environment for test test_ping_2
[ 90%] Restoring snapshot server_setup_nic for virtual machine server
[ 90%] Restoring snapshot client_setup_nic for virtual machine client
[ 90%] Running test test_ping_2
[ 90%] Executing bash command in virtual machine server with timeout 10m
+ ping 192.168.1.1 -c5
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=0.035 ms

--- 192.168.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4095ms
rtt min/avg/max/mdev = 0.017/0.030/0.035/0.007 ms
[ 90%] Taking snapshot test_ping_2 for virtual machine client
[ 90%] Taking snapshot test_ping_2 for virtual machine server
[100%] Test test_ping_2 PASSED in 0h:0m:14s
PROCESSED TOTAL 10 TESTS IN 0h:0m:28s
UP-TO-DATE: 8
RUN SUCCESSFULLY: 2
FAILED: 0
C:\Users\Testo>

We can see that both of our modified tests lost their cache and were run again. The reason is that test attributes are included in the test checksums.

But what now? This time no hypervisor snapshots were created at the end of the tests, so you might assume that the tests won't be cached anymore and will run every single time, right? Wrong! Let's run the tests again:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes
UP-TO-DATE TESTS:
server_install_ubuntu
server_prepare
server_install_guest_additions
server_setup_nic
client_install_ubuntu
client_prepare
client_install_guest_additions
client_setup_nic
test_ping_1
test_ping_2
PROCESSED TOTAL 10 TESTS IN 0h:0m:0s
UP-TO-DATE: 10
RUN SUCCESSFULLY: 0
FAILED: 0
C:\Users\Testo>

So what do we see? All the tests remained cached and nothing was run! And that's with two of our tests missing their hypervisor snapshots (which you can verify for yourself in the Hyper-V Manager).
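If you prefer the command line, you can check the same thing with PowerShell's Get-VMSnapshot cmdlet (assuming the machines are registered in Hyper-V under the names client and server):

C:\Users\Testo> Get-VMSnapshot -VMName client

No checkpoints named test_ping_1 or test_ping_2 should show up in the list.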

Why does this happen? Let's sort this out.

The thing is, there are two types of snapshots in the Testo Framework, and they work independently:

  1. Metadata snapshots. These are essentially small text files that the Testo Framework creates at the end of each test. You can't interact with them directly. The files contain various information about the tests, which helps Testo validate the cache. If you take a close look at the terminal output from the run of the no_snapshots tests above, you'll still see the Taking snapshot... messages: those refer to metadata snapshots.
  2. Hypervisor snapshots. These are the snapshots we're all familiar with. This kind of snapshot is created only if the test has no no_snapshots attribute (or its value is false, which is the default). Since we turned this attribute on, no hypervisor snapshots were created.

We can sum everything up with an important conclusion:

The no_snapshots attribute doesn't affect test caching. A test with this attribute is cached like any other. The attribute doesn't mean that the test is going to run every time.

It turns out we've saved some disk space and lost absolutely nothing, since the test_ping_1 and test_ping_2 snapshots aren't of any use to us. This gives us another important conclusion:

You can put the no_snapshots attribute on all the "leaf" tests (tests with no children) with literally no damage at all, since you're not going to restore your test bench to those states anyway.

no_snapshots in the intermediate tests

You might have gotten the impression that if no_snapshots saves disk space and doesn't affect test caching, then maybe it should be put on each and every test? That impression would be wrong.

Yes, this attribute doesn't affect the caching, but that doesn't mean there are no negative side effects. Let's demonstrate them by adding the attribute to the client_install_guest_additions test:

[no_snapshots: true]
test client_install_guest_additions: client_prepare {
    client install_guest_additions()
}

Now let's run this test and nothing else:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes --test_spec client_install_guest_additions
UP-TO-DATE TESTS:
client_install_ubuntu
client_prepare
TESTS TO RUN:
client_install_guest_additions
[ 67%] Preparing the environment for test client_install_guest_additions
[ 67%] Restoring snapshot client_prepare for virtual machine client
[ 67%] Running test client_install_guest_additions
[ 67%] Calling macro install_guest_additions() in virtual machine client
[ 67%] Plugging dvd C:/iso/testo-guest-additions-hyperv.iso into virtual machine client
[ 67%] Typing "mount /dev/cdrom /media" with interval 30ms in virtual machine client
[ 67%] Pressing key ENTER in virtual machine client
[ 67%] Waiting "mounting read-only" for 1m with interval 1s in virtual machine client
[ 67%] Calling macro exec_bash_command(command="dpkg -i /media/testo-guest-additions.deb", time_to_wait="1m") in virtual machine client
[ 67%] Typing "clear && dpkg -i /media/testo-guest-additions.deb && echo Result is $?" with interval 30ms in virtual machine client
[ 67%] Pressing key ENTER in virtual machine client
[ 67%] Waiting "Result is 0" for 1m with interval 1s in virtual machine client
[ 67%] Calling macro exec_bash_command(command="umount /media", time_to_wait="1m") in virtual machine client
[ 67%] Typing "clear && umount /media && echo Result is $?" with interval 30ms in virtual machine client
[ 67%] Pressing key ENTER in virtual machine client
[ 67%] Waiting "Result is 0" for 1m with interval 1s in virtual machine client
[ 67%] Sleeping in virtual machine client for 2s
[ 67%] Unplugging dvd from virtual machine client
[ 67%] Taking snapshot client_install_guest_additions for virtual machine client
[100%] Test client_install_guest_additions PASSED in 0h:0m:19s
PROCESSED TOTAL 3 TESTS IN 0h:0m:19s
UP-TO-DATE: 2
RUN SUCCESSFULLY: 1
FAILED: 0
C:\Users\Testo>

Let's also make sure that the test is cached, despite the no_snapshots: true attribute:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes --test_spec client_install_guest_additions
UP-TO-DATE TESTS:
client_install_ubuntu
client_prepare
client_install_guest_additions
PROCESSED TOTAL 3 TESTS IN 0h:0m:0s
UP-TO-DATE: 3
RUN SUCCESSFULLY: 0
FAILED: 0
C:\Users\Testo>

And now run the test client_setup_nic, which depends on the client_install_guest_additions test:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes --test_spec client_setup_nic
UP-TO-DATE TESTS:
client_install_ubuntu
client_prepare
client_install_guest_additions
TESTS TO RUN:
client_install_guest_additions
client_setup_nic
[ 60%] Preparing the environment for test client_install_guest_additions
[ 60%] Restoring snapshot client_prepare for virtual machine client
[ 60%] Running test client_install_guest_additions
[ 60%] Calling macro install_guest_additions() in virtual machine client
[ 60%] Plugging dvd C:/iso/testo-guest-additions-hyperv.iso into virtual machine client
[ 60%] Typing "mount /dev/cdrom /media" with interval 30ms in virtual machine client
[ 60%] Pressing key ENTER in virtual machine client
[ 60%] Waiting "mounting read-only" for 1m with interval 1s in virtual machine client
[ 60%] Calling macro exec_bash_command(command="dpkg -i /media/testo-guest-additions.deb", time_to_wait="1m") in virtual machine client
[ 60%] Typing "clear && dpkg -i /media/testo-guest-additions.deb && echo Result is $?" with interval 30ms in virtual machine client
[ 60%] Pressing key ENTER in virtual machine client
[ 60%] Waiting "Result is 0" for 1m with interval 1s in virtual machine client
[ 60%] Calling macro exec_bash_command(command="umount /media", time_to_wait="1m") in virtual machine client
[ 60%] Typing "clear && umount /media && echo Result is $?" with interval 30ms in virtual machine client
[ 60%] Pressing key ENTER in virtual machine client
[ 60%] Waiting "Result is 0" for 1m with interval 1s in virtual machine client
[ 60%] Sleeping in virtual machine client for 2s
[ 60%] Unplugging dvd from virtual machine client
[ 80%] Test client_install_guest_additions PASSED in 0h:0m:18s
[ 80%] Preparing the environment for test client_setup_nic
[ 80%] Running test client_setup_nic
[ 80%] Copying C:/Users/Alex/testo-tutorials/hyperv/11 - no_snapshots/./rename_net.sh to virtual machine client to destination /opt/rename_net.sh with timeout 10m
[ 80%] Executing bash command in virtual machine client with timeout 10m
+ chmod +x /opt/rename_net.sh
+ /opt/rename_net.sh 52:54:00:00:00:aa server_side
Renaming success
+ ip a a 192.168.1.2/24 dev server_side
+ ip l s server_side up
+ ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:01:25:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.154.93/28 brd 192.168.154.95 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe01:251f/64 scope link
       valid_lft forever preferred_lft forever
3: server_side: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:00:00:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 scope global server_side
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe00:aa/64 scope link tentative
       valid_lft forever preferred_lft forever
[ 80%] Taking snapshot client_setup_nic for virtual machine client
[100%] Test client_setup_nic PASSED in 0h:0m:2s
PROCESSED TOTAL 4 TESTS IN 0h:0m:20s
UP-TO-DATE: 3
RUN SUCCESSFULLY: 1
FAILED: 0
C:\Users\Testo>

We can see a very peculiar thing: the client_install_guest_additions test was marked both as UP-TO-DATE and as a TEST TO RUN. Let's sort this out.

When the Testo Framework scans the tests tree to figure out which tests need to run and which are cached, each test is evaluated individually. Since we want to run the client_setup_nic test, the cache of all its ancestors is probed first: client_install_ubuntu, client_prepare and client_install_guest_additions. All of these tests have a valid cache, so they are marked as UP-TO-DATE, which we can see in the output.

Then comes the time to check the cache of the client_setup_nic test itself. Its cache is invalid (because we changed its parent, the client_install_guest_additions test, earlier), so the test must be re-run. But how can we run it?

If the client_install_guest_additions test hadn't been marked with the no_snapshots attribute, Testo could simply have restored the virtual machines to the state they were in at the end of that test. But the test has no hypervisor snapshots, so there is nothing to restore the virtual machines from. This raises the question: how do we revert the client machine to the state it had at the end of the client_install_guest_additions test? To do so, the Testo Framework searches up the tests tree for a test that does have hypervisor snapshots, so that it can play the part of the "starting point". In our case, the client_prepare test is selected.

Testo restores the client machine to the client_prepare state and then re-runs the client_install_guest_additions test, just to bring the client machine to the client_install_guest_additions state. And that's why we see client_install_guest_additions in the TESTS TO RUN queue.

Once the client machine is in the correct state, we can finally run the client_setup_nic test itself. The whole process boils down to this chain: restore the client_prepare snapshot, re-run client_install_guest_additions, then run client_setup_nic.

If the client_setup_nic test also had the no_snapshots attribute, the plan to run, say, the test_ping_1 test would look like this: client_install_guest_additions->client_setup_nic->test_ping_1, starting from the client_prepare snapshot.

And now let's try to run all the tests at once:

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes
UP-TO-DATE TESTS:
server_install_ubuntu
server_prepare
server_install_guest_additions
server_setup_nic
client_install_ubuntu
client_prepare
client_install_guest_additions
client_setup_nic
TESTS TO RUN:
test_ping_1
test_ping_2
[ 80%] Preparing the environment for test test_ping_1
[ 80%] Restoring snapshot client_setup_nic for virtual machine client
[ 80%] Restoring snapshot server_setup_nic for virtual machine server
[ 80%] Running test test_ping_1
[ 80%] Executing bash command in virtual machine client with timeout 10m
+ ping 192.168.1.2 -c5
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.030 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.031 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.031 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=0.030 ms

--- 192.168.1.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4081ms
rtt min/avg/max/mdev = 0.017/0.027/0.031/0.008 ms
[ 80%] Taking snapshot test_ping_1 for virtual machine client
[ 80%] Taking snapshot test_ping_1 for virtual machine server
[ 90%] Test test_ping_1 PASSED in 0h:0m:8s
[ 90%] Preparing the environment for test test_ping_2
[ 90%] Restoring snapshot client_setup_nic for virtual machine client
[ 90%] Restoring snapshot server_setup_nic for virtual machine server
[ 90%] Running test test_ping_2
[ 90%] Executing bash command in virtual machine server with timeout 10m
+ ping 192.168.1.1 -c5
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.018 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.026 ms
64 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=0.032 ms

--- 192.168.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4091ms
rtt min/avg/max/mdev = 0.018/0.027/0.033/0.007 ms
[ 90%] Taking snapshot test_ping_2 for virtual machine client
[ 90%] Taking snapshot test_ping_2 for virtual machine server
[100%] Test test_ping_2 PASSED in 0h:0m:8s
PROCESSED TOTAL 10 TESTS IN 0h:0m:17s
UP-TO-DATE: 8
RUN SUCCESSFULLY: 2
FAILED: 0
C:\Users\Testo>

So what do we see? Despite the client_install_guest_additions test now having no hypervisor snapshots, the leaf tests run as usual, because we still have the virtual machine snapshots from the client_setup_nic and server_setup_nic tests.

It turns out the no_snapshots attribute is good for saving disk space, but sometimes at the cost of longer test runs.

Try adding the no_snapshots attribute to the server_install_guest_additions test and investigate which tests are going to run and when.
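If you need a starting point, the change mirrors the client-side test (this assumes server_install_guest_additions is declared the same way as its client counterpart):

[no_snapshots: true]
test server_install_guest_additions: server_prepare {
    server install_guest_additions()
}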

Now let's turn our attention to one more thing, after which we're going to state a few basic rules for applying the no_snapshots attribute.

no_snapshots in "anchor" tests is a bad idea

Before proceeding further, make sure that the client_install_guest_additions, server_install_guest_additions, test_ping_1 and test_ping_2 tests have the no_snapshots: true attribute and are cached.

With things arranged this way, we've managed to save quite a lot of disk space, and the test_ping_1 and test_ping_2 tests run just as quickly as before, as long as we don't touch the client_setup_nic and server_setup_nic tests and they keep their cache. We've reached a certain point of balance: we don't consume too much disk space, and we don't suffer any major inconvenience during test runs.

But let's demonstrate what's going to happen if we push the limit too far.

Let's add the no_snapshots attribute to the client_setup_nic and server_setup_nic tests.
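For reference, here is a sketch of what the client-side test might look like with the attribute in place. The body is reconstructed from the run log above and from the previous guides, so treat the exact commands as an approximation; server_setup_nic changes the same way:

[no_snapshots: true]
test client_setup_nic: client_install_guest_additions {
    client {
        copyto "./rename_net.sh" "/opt/rename_net.sh"
        exec bash """
            chmod +x /opt/rename_net.sh
            /opt/rename_net.sh 52:54:00:00:00:aa server_side
            ip a a 192.168.1.2/24 dev server_side
            ip l s server_side up
        """
    }
}

Now let's run everything: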

C:\Users\Testo> testo run tests.testo --stop_on_fail --param ISO_DIR C:\iso --assume_yes
UP-TO-DATE TESTS:
server_install_ubuntu
server_prepare
server_install_guest_additions
client_install_ubuntu
client_prepare
client_install_guest_additions
TESTS TO RUN:
server_install_guest_additions
server_setup_nic
client_install_guest_additions
client_setup_nic
test_ping_1
server_setup_nic
client_install_guest_additions
client_setup_nic
test_ping_2
...
C:\Users\Testo>

Just look at how big the TESTS TO RUN queue has become! We can see that the server_setup_nic, client_install_guest_additions and client_setup_nic tests are each scheduled to run twice! Let's figure out what's happening:

  1. We need to run two leaf tests, test_ping_1 and test_ping_2, both of which depend on the parent tests client_setup_nic and server_setup_nic.
  2. Since the client_setup_nic and server_setup_nic tests don't have hypervisor snapshots, the Testo Framework has to fall back to the closest ancestor tests that do have them.
  3. For test_ping_1, the resulting run path is: server_install_guest_additions->server_setup_nic->client_install_guest_additions->client_setup_nic->test_ping_1.
  4. A similar path is built for the test_ping_2 test as well! That is the only way to restore the virtual machines to the state the leaf test needs. And that is why some tests are scheduled to run twice.

Yes, we saved some more disk space, but at what cost? The test running time increased vastly: the disadvantages significantly outweigh the benefits.

So the question arises: are there any general rules about which tests should get the no_snapshots attribute and which shouldn't? I suggest the following:

  1. All the leaf tests (tests without any children) should get the no_snapshots attribute, since there is no harm in it.
  2. Intermediate tests should get the no_snapshots attribute if they are not anchor tests. A test is considered an anchor if its results are often restored when running its children.
  3. Tests with multiple children should not get the no_snapshots attribute.

If we apply these rules to our tests tree, we will get this:

  1. test_ping_1 and test_ping_2 are leaf tests, so they should get the no_snapshots attribute.
  2. client_prepare and server_prepare definitely shouldn't get the no_snapshots attribute, since they have more than one child.
  3. client_install_guest_additions and server_install_guest_additions should get the no_snapshots attribute if the client_setup_nic and server_setup_nic tests are going to stay cached most of the time. If they tend to lose their cache frequently, we should leave things as they are.
  4. The install_ubuntu tests take a long time to run. We should probably keep the hypervisor snapshots for them, even though we're not going to restore their results often. It is better to lose a little disk space than to repeat the Ubuntu Server installation when something goes wrong.
  5. The prepare tests may be marked with the no_snapshots attribute; no big harm there.

After these optimizations we get a pretty good balance between saving disk space and keeping test runs fast. A lot of the preparatory tests get the no_snapshots attribute, because we assume they are not going to run often (just once, ideally). The client_setup_nic and server_setup_nic tests are kept as the "anchor" tests: we assume their results will often be restored when running the "actual" complex tests, which will run much more frequently.

The rules above are not universal; keep them in mind as a general approach. Of course, there are situations where other rules apply, so don't be afraid to experiment!

Conclusions

In Testo-lang, the no_snapshots feature allows you to save disk space, potentially at the cost of test running time. However, if the feature is applied well, the damage to run time may be insignificant or nonexistent. So before applying it, consider which tests are going to run often and which are going to stay cached most of the time.

You can find the complete test scripts here.