I have a problem: I am using the Selenium (Firefox) WebDriver to open a webpage, click a few links, etc., then capture a screenshot.
My script runs fine from the CLI, but when run via a cronjob it does not get past the first find_element() call. I need to add some debugging to help me figure out why it is failing.
Basically, I have to click a 'log in' anchor before going to the login page. The markup of the element is:
<a class="lnk" rel="nofollow" href="/login.jsp?destination=/secure/Dash.jspa">log in</a>
I am using the find_element() method with the By.LINK_TEXT locator:
login = driver.find_element(By.LINK_TEXT, "log in").click()
I am a bit of a Python Noob, so I am battling with the language a bit...
A) How do I check that the link is actually being picked up by Python? Should I use a try/except block?
B) Is there a better/more reliable way to locate the DOM element than LINK_TEXT? E.g. in jQuery you can use a more specific selector: $('a.lnk:contains(log in)').do_something();
I have solved the main problem and it was just finger trouble: I was calling the script with incorrect parameters. Simple mistake.
I'd still like some pointers on how to check whether an element exists, as well as an example/explanation of implicit/explicit waits instead of using a crappy time.sleep() call.
Cheers, ns
a)
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def check_exists_by_xpath(driver, xpath):
    """Return True if at least one element matching the XPath is present."""
    try:
        driver.find_element(By.XPATH, xpath)
    except NoSuchElementException:
        return False
    return True
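
A minimal usage sketch against the "log in" anchor from the question; the XPath string is just an assumption about one way to match that link:

# Hypothetical usage with the anchor shown in the question
login_xpath = "//a[@class='lnk' and text()='log in']"
if check_exists_by_xpath(driver, login_xpath):
    driver.find_element(By.XPATH, login_xpath).click()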
b) Use XPath; it is the most reliable approach. Moreover, you can adopt XPath as the standard locator strategy across all your scripts and create helper functions like the one above for universal use.
UPDATE: I wrote the initial answer over 4 years ago, and at the time I thought XPath was the best option. Now I recommend using CSS selectors instead. I still recommend not mixing "by id", "by name", and so on; pick one single approach and stick with it.
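
For the wait part of the question, here is a hedged sketch combining a CSS selector with an explicit wait. The selector string is an assumption based on the markup shown above; WebDriverWait polls until a condition holds (here, the link becoming clickable) instead of sleeping for a fixed time.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Assumed selector for the anchor in the question:
# <a class="lnk" href="/login.jsp?...">log in</a>
login_css = "a.lnk[href*='login.jsp']"

# Explicit wait: poll for up to 10 seconds until the link is clickable,
# then click it - no time.sleep() needed
login = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, login_css))
)
login.click()

An implicit wait (driver.implicitly_wait(10)) instead applies a single timeout to every find_element() call on that driver; explicit waits are generally preferred because they target a specific condition.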