Selenium is known to be one of the most popular tools for automation testing, and the reasons are not far-fetched. It supports automated tests across multiple browsers, and test scripts can be written in several programming languages such as Python, Java, Perl, Ruby, etc. Since its inception, Selenium has proven to be a game-changer in the automation testing world. Imagine being able to log in to a website, click links on web pages, add comments to posts, send tweets, and more, just by writing test scripts in your IDE. Cool, right? That's what Selenium brings to the table.

There is a lot to love about Selenium, but it has its fair share of challenges like every other tool. In this article, we will discuss some of the challenges you will most likely face as a Selenium automation tester and how you can handle them. So let's get into it.

1. Flaky Tests – False-Positive and False-Negative Results

A false positive is a scenario where your test reports a failure even though the software is actually working correctly. On the other hand, a false negative is a situation where the test passes even though a real defect exists. Both false positives and false negatives are common challenges any automation tester will face, including Selenium testers. Tests that produce such inconsistent, unreliable results are called flaky tests.

It becomes even more serious when you run thousands of test scripts at once and some of them are flaky. Flakiness gives the automation tester a distorted picture of the software's actual state, and if left to linger, it can be quite costly. So, as a software automation tester, flaky tests are something you should always look out for and know how to handle.

First, it is critical to document flakiness whenever it occurs; you cannot handle a problem you are not aware of. Afterward, it is advisable to run your suite through a framework that can retry failures and quarantine flaky tests, rather than trusting the result of a single compromised run.
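One common mitigation is to re-run a failing test a few times and flag it as flaky when the outcome differs between runs. The sketch below illustrates the idea in plain Python; the decorator name is made up for illustration, and in practice a framework plugin such as pytest-rerunfailures handles this for you:

```python
import functools

def retry_and_flag_flaky(times=3):
    """Re-run a test up to `times` times; report it as flaky if the
    outcome differs between runs (illustrative helper, not a real plugin)."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            outcomes = []
            last_error = None
            for _ in range(times):
                try:
                    test_fn(*args, **kwargs)
                    outcomes.append(True)
                except AssertionError as exc:
                    outcomes.append(False)
                    last_error = exc
            if all(outcomes):
                return "passed"
            if not any(outcomes):
                raise last_error   # consistently failing: likely a real bug
            return "flaky"         # mixed results: document and investigate
        return wrapper
    return decorator
```

A test that fails only on some runs would come back as "flaky" rather than silently passing or failing, which gives you the documentation trail mentioned above.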

2. Tests running ahead of browser rendering

This is another common challenge: the test script issues its next steps while the browser has yet to finish executing the previous ones. It becomes precarious when one test step depends on the previous one. Since the browser has not finished the previous step while the script is already executing the next, the run ultimately ends in a test failure.

This problem is often caused by a slow internet connection, or by some web pages being appreciably slower to render than others, perhaps due to more dynamic elements. If your test script needs to move from one page to another, it is good practice to use a wait command or an expected condition in the code before executing the next step. This gives the script some buffer to let even the slowest web page load before proceeding.

Wait commands can be implicit or explicit. While you are at liberty to use either, it is advisable to stick with one type throughout your code. Many practitioners consider explicit waits the better choice, but take time to read up on both and weigh their pros and cons before deciding which fits your situation.

3. Cross-browser testing incompatibility

Sometimes, elements that render correctly in one browser are rendered differently in another. When this happens, it can affect the execution of your test scripts. There are numerous browsers out there, and it is quite tedious to test against all of them. What you can do is focus on the most popular ones: Chrome, Firefox, Edge, Safari, Internet Explorer, and Opera. Once your test scripts execute well on these, you can be reasonably confident that the vast majority of your end-users will have no problems. Taking it to the next level, you can ensure further compatibility by testing against specific browser versions: again, check for the most popular versions in use, and run the tests across different operating systems as well. Cross-browser testing can be a big challenge for automation testers, but it is rewarding when done well.
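A practical pattern is to write each test once and parameterize it over a list of browsers. The sketch below assumes the matching browsers and driver binaries are installed locally; the browser list and function names are illustrative:

```python
# Browsers to cover, ordered roughly by popularity (illustrative list).
BROWSERS = ["chrome", "firefox", "edge", "safari"]

def make_driver(name):
    """Return a fresh WebDriver for the named browser.
    Imported lazily; assumes the browser and its driver are installed."""
    from selenium import webdriver
    factories = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
        "edge": webdriver.Edge,
        "safari": webdriver.Safari,
    }
    return factories[name]()

def run_everywhere(test_fn):
    """Run one test function against every browser in BROWSERS
    and collect a per-browser pass/fail report."""
    results = {}
    for name in BROWSERS:
        driver = make_driver(name)
        try:
            test_fn(driver)
            results[name] = "passed"
        except Exception as exc:
            results[name] = f"failed: {exc}"
        finally:
            driver.quit()
    return results
```

In real projects the same idea is usually expressed through a test framework's parameterization features (or a cloud grid), but the shape of the loop is the same.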

4. Mobile app testing

Selenium is a great tool for testing web applications, but it cannot be used for mobile app testing. This is a bottleneck Selenium testers will run into: if the product has a mobile app, Selenium cannot perform quality assurance tests on it. As an alternative, you will need to turn your attention to a mobile app testing tool such as Appium. Since more and more organizations are building mobile apps, Appium adoption keeps growing. The good news is that there are now tools built on both Selenium and Appium, letting you combine the strengths of both worlds in one place; an example of such a tool is TestProject. You can enroll in an online Selenium certification training to get insight into how to use these tools.
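As a rough illustration of how close Appium feels to Selenium: a session is configured with capabilities describing the device and app, then driven through a WebDriver-style API. The sketch below assumes the Appium-Python-Client package and a local Appium server; every capability value and the server URL are placeholders:

```python
# Placeholder capabilities describing the device and app under test.
ANDROID_CAPS = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "deviceName": "emulator-5554",
    "app": "/path/to/app-under-test.apk",
}

def start_appium_session(server_url="http://127.0.0.1:4723"):
    """Open an Appium session (assumes an Appium server is running at
    server_url and the Appium-Python-Client package is installed)."""
    from appium import webdriver
    from appium.options.android import UiAutomator2Options
    options = UiAutomator2Options().load_capabilities(ANDROID_CAPS)
    return webdriver.Remote(server_url, options=options)
```

Once the session is open, the returned driver exposes find/click/send-keys calls much like Selenium WebDriver, which is why Selenium skills transfer well to Appium.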

5. Pop-up Handling in Web Pages

Many web pages now show pop-ups, whether on the homepage or just as a user is about to bounce off the page. Pop-ups are great for engagement but can be a nightmare for Selenium automation testers: Selenium WebDriver cannot automatically handle them. Thus, if your test script issues click commands and a pop-up shows up, those commands will fail to reach their targets, stalling the tests. After all, a pop-up is a new object sitting right in front of the web page, blocking access to the elements beneath it.

As mentioned earlier, Selenium WebDriver cannot handle arbitrary pop-ups, but it can handle JavaScript alerts. To solve this challenge, you can switch the driver's focus to the alert (via the Alert class or driver.switch_to.alert), which gives you a handle on it when an alert occurs. You can then choose to dismiss it, accept it, or read its text.

In conclusion, we have discussed five common issues you will face as a software tester. However, this should not deter you from learning Selenium, as it remains one of the best automation testing tools on the market. If you are looking to learn, search for an online Selenium training course and review its curriculum. You should also confirm that it includes real-time projects that will help you solidify what you learn.