Unusually for me, I am having trouble fetching elements from the website below.
The element hierarchy is as follows:
My goal is to fetch the rows (data-row-key='n') inside the class "rc-table-tbody".
Here below is my Python script:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

chromeOptions = Options()
chromeOptions.add_argument("--headless")
driver = webdriver.Chrome(chrome_options=chromeOptions, executable_path=DRIVER_PATH)
driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')
None of the attempts below works (each gives either an "unable to locate element" error or a timeout):
elements = driver.find_element_by_class_name("rc-table-tbody")
elements = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "rc-table-tbody")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.CLASS_NAME, "rc-table-tbody")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "selector_copied_from_inspect")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.XPATH, "xpath_copied_from_inspect")))
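For reference, since each data row carries a `data-row-key` attribute, an attribute selector can target the rows more precisely than the class name alone. Below is a small hypothetical helper (`row_selector` is my own name, not part of Selenium) that builds such a selector, with the intended Selenium usage sketched in comments; this assumes the table actually renders, which it will only do once the session is authenticated:

```python
def row_selector(row_key=None):
    """Build a CSS selector for data rows inside the rc-table body.

    Without a key it matches every row that has a data-row-key
    attribute; with a key it matches that single row.
    """
    base = 'tbody.rc-table-tbody tr'
    return base + ('[data-row-key]' if row_key is None else
                   f'[data-row-key="{row_key}"]')

# Intended use with an authenticated driver (not runnable without a browser):
#   rows = WebDriverWait(driver, 10).until(
#       EC.presence_of_all_elements_located((By.CSS_SELECTOR, row_selector()))
#   )
#   keys = [r.get_attribute("data-row-key") for r in rows]
```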
I appreciate any help, thanks!
Edit: I suspect the problem is related to cookies. The URL I was trying to fetch is behind a login on Binance.com, and I am already logged in in my regular Chrome browser. I therefore assumed that the driver would reuse the real Chrome browser's current cookies and there would be no need to log in. However, when I removed the "headless" argument, Selenium popped up the login page instead of the page I was trying to scrape.
How can I get the browser's current cookies so that Selenium can access the exact page I am trying to scrape?
