Parallelising your Playwright tests
Run your Playwright tests 12 times faster
So you have written your Playwright tests and they are chugging along fine. Over time, the number of test cases grows, and the suite takes steadily longer to run. What do you do? How do you get it back under control?
Let's assume, for argument's sake, that you have 500 Playwright tests for your app.
First of all, I would break the 500 tests into four or five separate categories based on the features they cover: say, 5 blocks of 100 tests running in parallel. If a block fails, you only need to retry those 100 tests instead of all 500.
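One way to model these blocks, assuming the tests are tagged by feature (the project names and tags below are hypothetical), is with Playwright projects in playwright.config.ts:

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Run test files within a block in parallel; 'workers' caps concurrency.
  fullyParallel: true,
  workers: 5,
  projects: [
    // Hypothetical feature blocks of ~100 tests each, selected by tag.
    { name: 'checkout', grep: /@checkout/ },
    { name: 'search',   grep: /@search/ },
    { name: 'account',  grep: /@account/ },
    { name: 'admin',    grep: /@admin/ },
    { name: 'reports',  grep: /@reports/ },
  ],
});
```

A failed block can then be retried in isolation with `npx playwright test --project=checkout` rather than rerunning the whole suite.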
If there are authentication flows involved, reuse the authenticated state via fixtures to bypass the login each time. This should save a significant amount of time per UI test.
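A minimal sketch of the storage-state approach Playwright supports for this, where a setup project logs in once and saves the session to disk (the URLs, selectors, and env var names below are assumptions about your app):

```typescript
// auth.setup.ts: log in once and persist the session.
import { test as setup } from '@playwright/test';

setup('authenticate', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill(process.env.USER_EMAIL!);
  await page.getByLabel('Password').fill(process.env.USER_PASSWORD!);
  await page.getByRole('button', { name: 'Sign in' }).click();
  await page.waitForURL('/dashboard');
  // Persist cookies and localStorage so every other test can reuse them.
  await page.context().storageState({ path: '.auth/user.json' });
});
```

In playwright.config.ts, register this as a `setup` project (`testMatch: /auth\.setup\.ts/`), then give your other projects `dependencies: ['setup']` and `use: { storageState: '.auth/user.json' }` so each test starts already logged in.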
Dockerise the environment
- One can use GitHub Actions to parallelise tests
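On GitHub Actions, Playwright's built-in `--shard` flag pairs naturally with a matrix strategy. A sketch, assuming a Node project with Playwright already in package.json (workflow and job names are illustrative):

```yaml
name: e2e
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4, 5]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npx playwright install --with-deps chromium
      # Each matrix job runs one fifth of the suite.
      - run: npx playwright test --shard=${{ matrix.shard }}/5
```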
Secondly, you can Dockerise the environment, run the tests on a c5.4xlarge (or a comparable compute-optimised EC2 instance), and run 12 tests in parallel. You can reduce costs by using a mix of on-demand and Spot Instances for the workload at hand.
```dockerfile
FROM mcr.microsoft.com/playwright:v1.37.0

RUN mkdir -p /opt/google/chrome
RUN ln -s /ms-playwright/chromium-1048/chrome-linux/chrome /opt/google/chrome/chrome

RUN npm uninstall -g yarn && \
    apt-get remove -y nodejs python3 --purge && \
    apt-get update && \
    apt-get install -y openjdk-11-jre uuid-runtime && \
    apt-get autoremove -y && \
    apt-get clean -y && \
    rm -rf /var/lib/apt/lists/*

# Remove unused browsers
RUN rm -rf /ms-playwright/firefox-* /ms-playwright/webkit-*

# Install nvm with node and npm
RUN mkdir -p /usr/local/nvm
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION v16.19.1
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.2/install.sh | bash
RUN /bin/bash -c "source $NVM_DIR/nvm.sh && nvm install $NODE_VERSION && nvm use --delete-prefix $NODE_VERSION"

# Add node and npm to the PATH
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION/bin
ENV PATH $NODE_PATH:$PATH

WORKDIR /app
RUN npm install -g yarn --loglevel silent

# Install the Allure CLI for test reporting
RUN wget https://repo.maven.apache.org/maven2/io/qameta/allure/allure-commandline/2.17.2/allure-commandline-2.17.2.tgz
RUN tar -xvzf allure-commandline-2.17.2.tgz && rm -rf allure-commandline-2.17.2.tgz
ENV PATH /app/allure-2.17.2/bin:$PATH

RUN yarn install --cache-folder ~/.cache/yarn --frozen-lockfile --ignore-scripts
```
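With an image like the one above, running the suite at the desired parallelism is a matter of passing `--workers` at run time (the image tag below is an assumption; `--ipc=host` is recommended when running Chromium in Docker):

```shell
# Build the image and run the suite with 12 parallel workers
docker build -t e2e-tests .
docker run --rm --ipc=host e2e-tests npx playwright test --workers=12
```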
Once these changes are in place, you can look into mocking the API responses for 3rd-party integrations. This helps remove one of the sources of flakiness.
- You need to ensure the 3rd-party mocks stay valid. You can use Zod to assert that the response structure remains the same.
Improving and optimising the pipeline execution time is a continuous process.
Thanks.