
How can I extract a URL from a string using Python?


For example:

string = "This is a link http://www.google.com"
How can I extract 'http://www.google.com'?


(Every link will be in the same format, i.e. starting with "http://".)

There may be a few ways to do this, but the cleanest is to use a regex:

>>> import re
>>> myString = "This is a link http://www.google.com"
>>> print(re.search(r"(?P<url>https?://[^\s]+)", myString).group("url"))
http://www.google.com
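
If you need every link in the string rather than just the first one, the same idea works with re.findall (dropping the named group so that findall returns the whole matches); the sample string here is my own:

>>> re.findall(r"https?://[^\s]+", "one link http://a.example and another https://b.example/page")
['http://a.example', 'https://b.example/page']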

In order to find web URLs in a generic string, you can use a regular expression. A simple regex for URL matching like the following should fit your case:

    regex = r'('

    # Scheme (HTTP, HTTPS, FTP and SFTP):
    regex += r'(?:(https?|s?ftp):\/\/)?'

    # www:
    regex += r'(?:www\.)?'

    regex += r'('

    # Host and domain (including ccSLD):
    regex += r'(?:(?:[A-Z0-9][A-Z0-9-]{0,61}[A-Z0-9]\.)+)'

    # TLD:
    regex += r'([A-Z]{2,6})'

    # IP Address:
    regex += r'|(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'

    regex += r')'

    # Port:
    regex += r'(?::(\d{1,5}))?'

    # Query path:
    regex += r'(?:(\/\S+)*)'

    regex += r')'
If you want to be even more precise, in the TLD section you should make sure that the TLD is actually a valid one (see the entire list of valid TLDs published by IANA).
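
If you go that route, here is a minimal sketch of the check, assuming you have downloaded IANA's list (https://data.iana.org/TLD/tlds-alpha-by-domain.txt) to a local file; the file name and helper functions are my own:

    # Hypothetical helper: check the captured TLD (group 4 of the regex above)
    # against a local copy of IANA's list of valid TLDs.
    def load_valid_tlds(path="tlds-alpha-by-domain.txt"):
        with open(path) as f:
            return {line.strip().lower() for line in f if not line.startswith("#")}

    def tld_is_valid(match, valid_tlds):
        tld = match.group(4)  # None when the IP-address branch matched instead
        return tld is None or tld.lower() in valid_tlds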

Then, you simply have to compile the regex above and use it to find possible matches:

    import re

    string = "This is a link http://www.google.com"

    find_urls_in_string = re.compile(regex, re.IGNORECASE)
    url = find_urls_in_string.search(string)

    if url is not None and url.group(0) is not None:
        print("URL parts: " + str(url.groups()))
        print("URL" + url.group(0).strip())
For the string "This is a link http://www.google.com", this will output:

    URL parts: ('http://www.google.com', 'http', 'google.com', 'com', None, None)
    URL: http://www.google.com
If you change the input to a more complex URL, for example "This is also a URL https://www.host.domain.com:80/path/page.php?query=value&a2=v2#foo but it is not anymore", the output will be:

    URL parts: ('https://www.host.domain.com:80/path/page.php?query=value&a2=v2#foo', 'https', 'host.domain.com', 'com', '80', '/path/page.php?query=value&a2=v2#foo')
    URL: https://www.host.domain.com:80/path/page.php?query=value&a2=v2#foo

Note: if you are looking for more than one URL in a single string, you can still use the same regex; just use findall() instead of search().
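
Because the regex above contains several capture groups, findall() returns tuples of groups rather than whole URLs, so here is a sketch using finditer() instead (the sample text is my own):

    # finditer() yields match objects, so group(0) gives the whole URL
    text_with_two = "See http://www.google.com and https://www.host.domain.com:80/path/page.php"
    all_urls = [m.group(0) for m in find_urls_in_string.finditer(text_with_two)]
    print(all_urls)
    # ['http://www.google.com', 'https://www.host.domain.com:80/path/page.php']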

There is another way to extract URLs from text easily. You can use urlextract to do it for you; just install it via pip:

pip install urlextract
Then you can use it like this:

from urlextract import URLExtract

extractor = URLExtract()
urls = extractor.find_urls("Let's have URL stackoverflow.com as an example.")
print(urls) # prints: ['stackoverflow.com']
You can find more info on my GitHub page:


Note: it downloads a list of TLDs from iana.org to keep itself up to date, but if the program does not have internet access, this is not the right tool for you.

This extracts all URLs with parameters; somehow none of the above examples worked for me:

import re

data = 'https://net2333.us3.list-some.com/subscribe/confirm?u=f3cca8a1ffdee924a6a413ae9&id=6c03fa85f8&e=6bbacccc5b'

WEB_URL_REGEX = r"""(?i)\b((?:https?:(?:/{1,3}|[a-z0-9%])|[a-z0-9.\-]+[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)/)(?:[^\s()<>{}\[\]]+|\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\))+(?:\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’])|(?:(?<!@)[a-z0-9]+(?:[.\-][a-z0-9]+)*[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)\b/?(?!@)))"""
re.findall(WEB_URL_REGEX, data)
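
Continuing the snippet above, a small wrapper makes this reusable (the function name is my own); for the data string above it returns the full URL, query parameters included:

def extract_urls(text):
    """Return all URLs found in text, using WEB_URL_REGEX from above."""
    return re.findall(WEB_URL_REGEX, text)

print(extract_urls(data))
# ['https://net2333.us3.list-some.com/subscribe/confirm?u=f3cca8a1ffdee924a6a413ae9&id=6c03fa85f8&e=6bbacccc5b']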
You can use the following patterns to extract any URL from a string:

1.

>>> import re
>>> string = "This is a link http://www.google.com"
>>> pattern = r'[(http://)|\w]*?[\w]*\.[-/\w]*\.\w*[(/{1})]?[#-\./\w]*[(/{1,})]?'
>>> re.search(pattern, string).group()
'http://www.google.com'
>>> TWEET = ('New PyBites article: Module of the Week - Requests-cache '
             'for Repeated API Calls - http://pybit.es/requests-cache.html '
             '#python #API')
>>> re.search(pattern, TWEET).group()
'http://pybit.es/requests-cache.html'
>>> tweet = ('PyBites My Reading List | 12 Rules for Life #books '
             'that expand the mind! '
             'http://pbreadinglist.herokuapp.com/books/'
             'TvEqDAAAQBAJ#.XVOriU5z2tA.twitter '
             '#psychology #philosophy')
>>> re.findall(pattern, tweet)
['http://pbreadinglist.herokuapp.com/books/TvEqDAAAQBAJ#.XVOriU5z2tA.twitter']
In order to take the above pattern to the next level, we can also detect hashtags alongside the URLs in the following way:

2.

    >>> pattern = r'[(http://)|\w]*?[\w]*\.[-/\w]*\.\w*[(/{1})]?[#-\./\w]*[(/{1,})]?|#[.\w]*'
    >>> re.findall(pattern, tweet)
    ['#books', 'http://pbreadinglist.herokuapp.com/books/TvEqDAAAQBAJ#.XVOriU5z2tA.twitter', '#psychology', '#philosophy']

  • The above example of getting URLs and hashtags can be shortened to:

    >>> pattern = r'((?:#|http)\S+)'
    >>> re.findall(pattern, tweet)
    ['#books', 'http://pbreadinglist.herokuapp.com/books/TvEqDAAAQBAJ#.XVOriU5z2tA.twitter', '#psychology', '#philosophy']
    
  • The pattern below can match two alphanumeric strings separated by a '.' as a URL:

    >>> pattern = r'(?:http://)?\w+\.\S*[^.\s]'
    >>> tweet = ('PyBites My Reading List | 12 Rules for Life #books '
                 'that expand the mind! '
                 'www.google.com/telephone/wire... '
                 'http://pbreadinglist.herokuapp.com/books/'
                 'TvEqDAAAQBAJ#.XVOriU5z2tA.twitter '
                 'http://-www.pip.org '
                 'google.com '
                 'twitter.com '
                 'facebook.com '
                 '#psychology #philosophy')
    >>> re.findall(pattern, tweet)
    ['www.google.com/telephone/wire', 'http://pbreadinglist.herokuapp.com/books/TvEqDAAAQBAJ#.XVOriU5z2tA.twitter', 'www.pip.org', 'google.com', 'twitter.com', 'facebook.com']
    
You can try any complicated URLs with the number 1 and 2 patterns. To learn more about the re module in Python, do check out the regex tutorials over at Real Python. Cheers!
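
Since the shortened pattern in (2) returns URLs and hashtags mixed together, a small follow-up step (my own sketch, plain Python) can separate the two:

    >>> tokens = re.findall(r'((?:#|http)\S+)', tweet)  # the shortened pattern from (2)
    >>> urls = [t for t in tokens if t.startswith('http')]
    >>> hashtags = [t for t in tokens if t.startswith('#')]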


Comments:

  • You can check the following answer as well. When I try that solution, no results are returned. If this is for a raw text file (as mentioned in your question), you might check this answer instead; see the possible duplicate.
  • Way too crude for many real-world scenarios. It fails completely for ftp:// URLs, mailto: URLs and the like, and it will naively grab the trailing part of HTML markup such as an <a href="...">click</a> link (i.e. everything up through "click").
  • @tripleee The question is not about parsing HTML; it is about finding a URL in a string that will always be in http format, so this works great. But yes, it is very important for people to be aware of what you are saying if they are parsing HTML or something similar here.
  • So the regex ends up being (?:(https?|s?ftp):\/\/)?(?:www\.)?((?:(?:[A-Z0-9][A-Z0-9-]{0,61}[A-Z0-9]\.)+)([A-Z]{2,6})|(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}))(?::(\d{1,5}))?(?:(\/\S+)*). Note that some funny endings are now also captured that would not have been caught before.