Python: How do I get post URLs from Instagram with Selenium, given that they change dynamically whenever I scroll down?


I am trying to scrape the posts of an Instagram account, but whenever I tell it to scroll down, the previously loaded links disappear and new links appear, and not all in the same place. Right now it only ever captures 29 out of 1100 posts:

 while count < 10:
        for i in range(1, 2):
            self.browser.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            print('.', end="", flush=True)
            time.sleep(2)

        elements = self.browser.find_elements_by_xpath("//div[@class='v1Nh3 kIKUG  _bz0w']")
        hrefElements = self.browser.find_elements_by_xpath("//div[@class='v1Nh3 kIKUG  _bz0w']/a")

        elements_link = [x.get_attribute("href") for x in hrefElements]

        i = 1
        unique = 1
        text_file = open("Passed.txt", "r")
        lines = text_file.readlines()
        text_file.close()

        for elements in elements_link:
            print(str(i) + '.', end="", flush=True)
            found = self.found(elements, lines)

            if found == True:
                pass
            else:
                with open('Passed.txt', 'a') as f:
                    f.write(elements + '\n')
                unique += 1
            i += 1
        count += 1

        print('-----------------------------------------------')
        print('No. of unique Posts Captured : ' + str(unique))
        print('-----------------------------------------------')

        

I am trying to capture all 1100 posts.

(Screenshots showing the page markup before each scroll, and how it changes after scrolling down, were attached here.)


You should collect the links first and only then scroll the page: save the visible links, scroll, and then collect the links that appear after scrolling. That way you also keep the links that disappear from the DOM as the page scrolls. Here is an example:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec

wait = WebDriverWait(self.browser, 10)
links = []
number_of_posts = 1100

while True:
    # Collect the currently visible links *before* scrolling, so the posts
    # Instagram removes from the DOM on scroll are not lost.
    hrefElements = wait.until(ec.visibility_of_all_elements_located(
        (By.XPATH, "//div[@class='v1Nh3 kIKUG  _bz0w']/a")))

    elements_link = [x.get_attribute("href") for x in hrefElements]
    for link in elements_link:
        if link not in links:
            links.append(link)

    # Now scroll down to trigger loading of the next batch of posts.
    self.browser.execute_script('window.scrollTo(0,document.body.scrollHeight)')
    self.browser.implicitly_wait(5)

    if len(links) >= number_of_posts:
        break

links = links[:number_of_posts]
with open('Passed.txt', 'a') as f:
    for link in links:
        f.write(link + '\n')  # write each collected link, one per line
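Since `link not in links` scans the whole list on every check, it slows down as the list grows toward 1100 entries. The batch-dedup pattern the answer relies on can be sketched with a plain `set` for O(1) membership tests; the helper `collect_unique` below is hypothetical, shown without a browser just to illustrate the pattern:

```python
# A minimal sketch of the batch-dedup pattern, independent of Selenium:
# each scroll yields a batch of hrefs that overlaps the previous viewport,
# and we keep first-seen order while skipping duplicates.
def collect_unique(batches, limit):
    seen = set()      # O(1) membership test instead of `link not in list`
    ordered = []      # preserves first-seen order for the output file
    for batch in batches:
        for link in batch:
            if link not in seen:
                seen.add(link)
                ordered.append(link)
        if len(ordered) >= limit:   # stop once enough posts are collected
            break
    return ordered[:limit]

batches = [
    ["/p/a/", "/p/b/", "/p/c/"],
    ["/p/b/", "/p/c/", "/p/d/"],   # overlap carried over from the previous scroll
    ["/p/d/", "/p/e/"],
]
print(collect_unique(batches, 4))  # → ['/p/a/', '/p/b/', '/p/c/', '/p/d/']
```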
         

   

Comments:
To improve your question and make it reproducible, please provide some code. / OK, hold on... let me set it up. / Hi, I tried it, but it just keeps giving the same links over and over. / Hey, thanks... it actually worked after a few tweaks!! / You're welcome! I'll edit the answer in case anyone wants to know what to change; make sure you add the link deduplication before scrolling:

    all_link = []
    for link in elements_link:
        if link not in all_link:
            all_link.append(link)
    self.browser.execute_script('window.scrollTo(0,document.body.scrollHeight)')
    self.browser.implicitly_wait(5)