I was new at System Verification and had been assigned to a project where everything was different. The test framework was written in Python and Gauge, while I had used .NET and NUnit (I did know Python at that point). CI/CD was done with GitLab CI, and I was used to Azure DevOps. Finally, remote test execution was done using “dockerized containers” on GitLab, while I was used to TeamCity and Jenkins. That last part was the most frightening, probably because I had heard all the hype around Docker.
After a gentle introduction to Docker Desktop by my mentor (who also helped me set it up :)), I was able to watch my tests running on my computer inside VNC Viewer. One might ask: “Why so many additional applications just to run tests locally?”, and I was asking myself the same thing. Why would someone go through so much weird configuration just to say: “Ahem, I am running my tests using Docker containers”?
After using it for a few weeks and reading more about it, I realized that Docker is the way to go for all my future projects. Until then, I had simply been too comfortable with the tools I was already using.
Just as the potato is a forgotten reason behind the population boom in 18th-century Europe, shipping containers are a forgotten reason behind the world trade boom of the 20th century. Standard sizes and weights were introduced, and all transportation and shipping infrastructure was adjusted to them. This increased the volume of goods and the speed of delivery, and it improved safety, since everything was sealed inside containers before arriving at the ports.
Docker is the de facto standard for containerizing software today, and everything that applies to real-world containers applies to Docker as well. The initial investment might be slightly higher, but the long-term benefits are immense. Docker is used heavily in DevOps today, but you might want to start using it for your test automation as well. We at System Verification are using Docker on several projects now and have had great experiences with it.
Without further ado, let me list the pros and cons of using Docker for testing.
Pros:
Cons:
How to set it up?
For this setup I will not use a Selenium Hub with nodes, but a single web driver running inside a Docker container. For running the tests I will use Visual Studio on Windows, but you can follow along with minor modifications on other operating systems as well. To skip the part where you write a basic test framework, you can clone one from my GitHub page:
git clone https://github.com/emiride/TestAutomationBase.git
The test framework is written in C# using .NET Core 3.1 and the NUnit framework. It follows the Page Object pattern and uses a singleton-per-thread strategy for the web driver. It is built to test the example website http://automationpractice.com/index.php.
Feel free to use it for any need you have.
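To give you an idea of what singleton-per-thread means here, a rough sketch could look like the following; the class and member names are illustrative, not necessarily what the repository uses:

using System;
using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

// Sketch only: each test thread gets exactly one driver instance,
// so tests can run in parallel without sharing browser state.
public static class Driver
{
    private static readonly ThreadLocal<IWebDriver> _driver =
        new ThreadLocal<IWebDriver>(() =>
            new RemoteWebDriver(
                new Uri("http://localhost:4444/wd/hub"), // the Selenium server inside the container
                new ChromeOptions()));

    public static IWebDriver Instance => _driver.Value;

    public static void Quit()
    {
        if (_driver.IsValueCreated)
        {
            _driver.Value.Quit();
        }
    }
}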
In your preferred terminal, just run:
docker run -p 4444:4444 -p 5900:5900 selenium/standalone-chrome-debug
Another option is to navigate to the folder where the docker-compose.yml file is located and type docker-compose up. This has the same effect: it will spin up the container and expose ports 4444 and 5900.
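If you are wondering what such a compose file might contain, a minimal version could look roughly like this (the file in the repository may differ):

version: "3"
services:
  chrome:
    image: selenium/standalone-chrome-debug
    ports:
      - "4444:4444"   # WebDriver endpoint the tests talk to
      - "5900:5900"   # VNC endpoint for watching the browser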
The solution itself contains only one test, RegistrationTest, which checks whether the email entered in the registration form is shown in the second step of registration. Quite simple.
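For illustration, such a test could be sketched like this; the selectors and page flow are based on the example website and may differ from the actual code in the repository:

using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

[TestFixture]
public class RegistrationTest
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        // Connect to the Selenium server running inside the Docker container.
        _driver = new RemoteWebDriver(new Uri("http://localhost:4444/wd/hub"), new ChromeOptions());
        _driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(10);
    }

    [Test]
    public void EmailIsShownInSecondRegistrationStep()
    {
        const string email = "test@example.com";
        _driver.Navigate().GoToUrl("http://automationpractice.com/index.php");
        // Open the sign-in page, submit the email in the "Create an account" form,
        // then check that it is pre-filled in the second step of the registration.
        _driver.FindElement(By.ClassName("login")).Click();
        _driver.FindElement(By.Id("email_create")).SendKeys(email);
        _driver.FindElement(By.Id("SubmitCreate")).Click();
        var shownEmail = _driver.FindElement(By.Id("email")).GetAttribute("value");
        Assert.AreEqual(email, shownEmail);
    }

    [TearDown]
    public void TearDown() => _driver?.Quit();
}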
Now that you have Docker installed and the container running, you can go to Test -> Test Explorer in Visual Studio to see the test. Right-click on it -> Run. It will report that everything works and, voila, the test passes. But you did not see anything. How do you know what is going on?
Indeed, you will need some additional software, such as VNC Viewer, to see what is going on inside the container.
After you install VNC Viewer, you need to add a connection to the running container. Remember that we have two ports mapped to localhost: port 4444 is for running the tests and port 5900 is for watching the execution. Go to File -> New connection… and specify localhost:5900 as the VNC Server. Now you can open the connection and see what is happening inside the container while the tests run.
What you are seeing is a minimal operating system, just enough for the web driver to run in.
There are several benefits of running your tests in dockerized containers, and I will try to list them and give examples of how they are useful.
The first benefit is portability. It does not only mean that you can have the same setup on Windows, macOS and Linux; it also means that you can easily run your tests on every popular cloud provider with minimal additional configuration. Nowadays, every major DevOps tool supports Docker.
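As an illustration of how small that configuration can be, a GitLab CI job could use the same Selenium image as a service; this is only a sketch under assumed image names, and your pipeline will certainly differ:

run-ui-tests:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  services:
    - name: selenium/standalone-chrome-debug
      alias: chrome
  script:
    # inside the job, the tests would target http://chrome:4444/wd/hub instead of localhost
    - dotnet test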
You could say that this setup is not easy, but I assure you that it is a lot easier and more stable than keeping all the web drivers updated on every machine, not to mention onboarding new team members, who only need to install two additional pieces of software.
You can also run your tests locally. That is a little lie, because technically, when you run your tests inside containers, you are running them remotely. You can even set up a Selenium Grid on your machine and run multiple nodes in parallel locally. This is especially handy if you are tight on budget and time, since it speeds up test execution on your local machine.
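Such a local grid can be described in a compose file roughly like the one below; the image and environment variable names follow the classic Selenium 3 hub and node images, so adjust them to the versions you use:

version: "3"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome
    environment:
      - HUB_HOST=hub   # tell the node where to register
      - HUB_PORT=4444
    depends_on:
      - hub

Running docker-compose up --scale chrome=3 would then start three Chrome nodes behind a single hub, while the tests keep pointing at localhost:4444.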
How many times has your regression run shown 30-40 failing tests, only for you to rerun them and find that they all pass on your machine? Then you must go through the logs and check what happened during the night.
With Docker, you can ensure that screen resolutions, versions, rights and privileges in the remote environment are identical to those in your own environment. When a test fails remotely, you can be sure that it will fail locally as well.
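For example, the Selenium debug images let you pin the screen resolution through environment variables (check the documentation for the image version you use):

docker run -p 4444:4444 -p 5900:5900 -e SCREEN_WIDTH=1920 -e SCREEN_HEIGHT=1080 selenium/standalone-chrome-debug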
When you start automating tests, you usually do not expect that, a year later, your regression might end up running the whole night. Whether you use machines on site or in the cloud, you will eventually need to increase their number, and with Docker the configuration needed to scale is a lot smaller than with the traditional approach.
I hope these reasons are enough to convince you that Docker is a cheaper, more maintainable and simpler solution for you and your clients. System Verification is currently using this approach for a few of its clients, and the results are just as described.
Happy Testing!
Author: Emir Hodzic, Test Engineer