
Python: searching for a string, allowing a single mismatched character anywhere in the string

Tags: python, string, pattern-matching, string-matching, dna-sequence

I am working with DNA sequences of length 25 (see examples below). I have a list of 230,000 of them and need to look for each sequence in the entire genome (of the Toxoplasma gondii parasite). I am not sure how large the genome is, but it is much longer than my 230,000 sequences.

I need to look for each of the 25-character sequences, for example (
agcccccatgattgaacagatcat

The genome is formatted as one contiguous string, i.e. (
catgggaggctttgcggagcctggaggcgggaggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcggcgtt….

I don't care where it is found or how many times, only whether it is found at all.
I guess that part is simple --
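The exact-containment test really is a one-liner; a minimal sketch, using the question's own (truncated) example strings:

```python
genome = "catgggaggctttgcggagcctggagg"   # truncated example genome from the question
pattern = "agcccccatgattgaacagatcat"     # one query sequence from the question

# True if pattern occurs verbatim anywhere in genome.
found = pattern in genome
```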

But I also want to find a close match, defined as a wrong (mismatched) character at any position, but only one position, and to record that position in the sequence. I am not sure how to do this. The only thing I can think of is using a wildcard and performing the search with the wildcard in each position, i.e. searching 25 times.

For example,

AGCCTCCCATGATTGAACAGATCAT    
AGCCTCCCATGATAGAACAGATCAT
could be considered a close match with a mismatch at position 13.

Speed is not a big issue because I am only doing this 3 times, though it would be nice if it were fast.

There are programs that do this -- find matches and partial matches -- but I am looking for a type of partial match that those applications cannot discover.

Here is a similar post for Perl, although they are only comparing sequences rather than searching a contiguous string:


You can search with regular expression matching, using Python's built-in capabilities.

The re module in Python

A regular expressions primer

You may find the assorted routines there to be of some use.

This suggests . The problem with string similarity here is that you have to test against the contiguous string; so if you compare one of your 25-character sequences with the contiguous string, you will get a very low similarity score.

If you compute the longest common subsequence between a 25-character sequence and the contiguous string, then you will know whether it is in the string if the lengths are the same.
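A way to sketch this idea with the standard library: difflib's SequenceMatcher reports the longest common contiguous *block* (a substring rather than the subsequence mentioned above, which is actually the stronger and more useful test for locating a sequence); if that block is as long as the 25-character sequence, the sequence occurs exactly. The function name is illustrative:

```python
from difflib import SequenceMatcher

def occurs_exactly(seq, genome):
    """True if the longest common contiguous block between seq and
    genome is as long as seq itself, i.e. seq occurs verbatim."""
    match = SequenceMatcher(None, seq, genome, autojunk=False) \
        .find_longest_match(0, len(seq), 0, len(genome))
    return match.size == len(seq)
```

Note this is far slower than a plain `seq in genome` check and is shown only to make the similarity idea concrete.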

>>> import re
>>> seq="AGCCTCCCATGATTGAACAGATCAT"
>>> genome = "CATGGGAGGCTTGCGGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTTGCGGAGTGCGGAGCCTGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTT..."
>>> seq_re=re.compile('|'.join(seq[:i]+'.'+seq[i+1:] for i in range(len(seq))))

>>> seq_re.findall(genome)  # list of matches
[]  

>>> seq_re.search(genome) # None if not found, otherwise a match object
This one stops at the first match, so it may be a bit faster when there are multiple matches.

>>> print "found" if any(seq_re.finditer(genome)) else "not found"
not found

>>> print "found" if seq_re.search(genome) else "not found" 
not found

>>> seq="CAT"
>>> seq_re=re.compile('|'.join(seq[:i]+'.'+seq[i+1:] for i in range(len(seq))))
>>> print "found" if seq_re.search(genome) else "not found"
found
For a genome of length 10,000,000, a single-threaded scan of all 230,000 sequences would take about 2.5 days, so you may want to split the task across several cores/CPUs.

You can always start implementing a more efficient algorithm while this one is running :)

If you also want to allow for a single deleted or inserted element, change the regexp to

>>> seq_re=re.compile('|'.join(seq[:i]+'.{0,2}'+seq[i+1:] for i in range(len(seq))))
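To illustrate why `.{0,2}` covers the extra error types: matching zero characters at a position corresponds to a deletion, one character to a substitution, and two characters to an insertion next to that position. A hedged sketch with a short pattern (it also loosely matches some two-character rewrites, which is the price of this trick):

```python
import re

seq = "ACGT"
# Each alternative replaces one position with ".{0,2}": zero characters
# there allows a deletion, one a substitution, two an insertion.
seq_re = re.compile('|'.join(seq[:i] + '.{0,2}' + seq[i+1:]
                             for i in range(len(seq))))

hit_del = seq_re.search("AAACGAAA")    # contains "ACG" = ACGT with T deleted
hit_sub = seq_re.search("AAACTTAAA")   # contains "ACTT" = ACGT with G -> T
miss = seq_re.search("GGGG")           # nothing close to ACGT
```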

Before you read on, have you looked at ?

You seem to be looking for approximate matches with one substitution error and zero insertion/deletion errors, i.e. a Hamming distance of 1.

If you have a Hamming-distance matching function (see e.g. the link provided by Ignacio), you could use it like this to search for the first match:

any(Hamming_distance(genome[x:x+25], sequence) == 1 for x in xrange(len(genome)))
But this would be rather slow, because (1) the Hamming-distance function keeps on grinding after the second substitution error and (2) after a failure it advances the cursor by one rather than skipping ahead based on what it has seen (as a Boyer-Moore search does).

You can overcome (1) with a function like this:

def Hamming_check_0_or_1(genome, posn, sequence):
    errors = 0
    for i in xrange(25):
        if genome[posn+i] != sequence[i]:
            errors += 1
            if errors >= 2:
                return errors
    return errors 
Note: that's not Pythonic but C-ic, because you would need to use C (perhaps via Cython) to get reasonable speed.
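A sketch of how the early-exit check above might be driven across the genome; the `first_match` driver is my illustration, not part of the answer, and `range` is used so it runs on either Python version:

```python
def hamming_check_0_or_1(genome, posn, sequence):
    """Count mismatches between sequence and genome[posn:], bailing out
    as soon as a second mismatch is seen (same idea as the answer's code)."""
    errors = 0
    for i in range(len(sequence)):
        if genome[posn + i] != sequence[i]:
            errors += 1
            if errors >= 2:
                return errors
    return errors

def first_match(genome, sequence):
    """Return (distance, offset) of the first window within Hamming
    distance 1 of sequence, or None if there is no such window."""
    m = len(sequence)
    for posn in range(len(genome) - m + 1):
        d = hamming_check_0_or_1(genome, posn, sequence)
        if d < 2:
            return d, posn
    return None
```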

Navarro and Raffinot (google "Navarro Raffinot nrgrep") have done work on skipping in bit-parallel approximate Levenshtein search, which may be adaptable to Hamming search. Note that bit-parallel methods have limitations on query-string length and alphabet size, but yours are 25 and 4 respectively, so no problems there. Update: skipping is probably not much help with an alphabet size of 4.

When you google for Hamming-distance search, you will notice lots of material about implementing it in hardware rather than in software. This is a strong hint that whatever algorithm you come up with should probably be implemented in C or some other compiled language.

Update: working code for the bit-parallel method

I have also supplied a simplistic method to help with correctness checking, and I have packaged a variant of Paul's re code for some comparisons. Note that using re.finditer() delivers non-overlapping results, which can cause a distance-1 match to shadow an exact match; see my last test case.

The bit-parallel method has these features: guaranteed linear behaviour O(N), where N is the text length. Note: the naive method is O(NM), as is the regex method (M is the pattern length). A Boyer-Moore-style method would be worst case O(NM) and expected O(N). The bit-parallel method can also be used easily when the input has to be buffered: it can be fed a byte or a megabyte at a time; no look-ahead, no problems with buffer boundaries. The big advantage: the speed of that simple per-input-byte code when coded in C.

Downsides: the pattern length is effectively limited to the number of bits in a fast register, e.g. 32 or 64. In this case the pattern length is 25; no problem. It uses extra memory (the S_table) proportional to the number of distinct characters in the pattern. In this case the "alphabet size" is only 4; no problem.

Details are from . The algorithm there does approximate search using Levenshtein distance. To convert it to use Hamming distance, I simply (!) removed the pieces of statement 2.1 that handle insertion and deletion. You'll notice lots of references to "R" with "d" superscripts. "d" is distance. We need only 0 and 1. These "R"s become the R0 and R1 variables in the code below.

# coding: ascii

from collections import defaultdict
import re

_DEBUG = 0


# "Fast Text Searching with Errors" by Sun Wu and Udi Manber
# TR 91-11, Dept of Computer Science, University of Arizona, June 1991.
# http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.8854

def WM_approx_Ham1_search(pattern, text):
    """Generate (Hamming_dist, start_offset)
    for matches with distance 0 or 1"""
    m = len(pattern)
    S_table = defaultdict(int)
    for i, c in enumerate(pattern):
        S_table[c] |= 1 << i
    R0 = 0
    R1 = 0
    mask = 1 << (m - 1)
    for j, c in enumerate(text):
        S = S_table[c]
        shR0 = (R0 << 1) | 1
        R0 = shR0 & S
        R1 = ((R1 << 1) | 1) & S | shR0
        if _DEBUG:
            print "j= %2d msk=%s S=%s R0=%s R1=%s" \
                % tuple([j] + map(bitstr, [mask, S, R0, R1]))
        if R0 & mask: # exact match
            yield 0, j - m + 1
        elif R1 & mask: # match with one substitution
            yield 1, j - m + 1

if _DEBUG:

    def bitstr(num, mlen=8):
       wstr = ""
       for i in xrange(mlen):
          if num & 1:
             wstr = "1" + wstr
          else:
             wstr = "0" + wstr
          num >>= 1
       return wstr

def Ham_dist(s1, s2):
    """Calculate Hamming distance between 2 sequences."""
    assert len(s1) == len(s2)
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

def long_check(pattern, text):
    """Naively and understandably generate (Hamming_dist, start_offset)
    for matches with distance 0 or 1"""
    m = len(pattern)
    for i in xrange(len(text) - m + 1):
        d = Ham_dist(pattern, text[i:i+m])
        if d < 2:
            yield d, i

def Paul_McGuire_regex(pattern, text):
    searchSeqREStr = (
        '('
        + pattern
        + ')|('
        + ')|('.join(
            pattern[:i]
            + "[ACTGN]".replace(c,'')
            + pattern[i+1:]
            for i,c in enumerate(pattern)
            )
        + ')'
        )
    searchSeqRE = re.compile(searchSeqREStr)
    for match in searchSeqRE.finditer(text):
        locn = match.start()
        dist = int(bool(match.lastindex - 1))
        yield dist, locn


if __name__ == "__main__":

    genome1 = "TTTACGTAAACTAAACTGTAA"
    #         01234567890123456789012345
    #                   1         2

    tests = [
        (genome1, "ACGT ATGT ACTA ATCG TTTT ATTA TTTA"),
        ("T" * 10, "TTTT"),
        ("ACGTCGTAAAA", "TCGT"), # partial match can shadow an exact match
        ]

    nfailed = 0
    for genome, patterns in tests:
        print "genome:", genome
        for pattern in patterns.split():
            print pattern
            a1 = list(WM_approx_Ham1_search(pattern, genome))
            a2 = list(long_check(pattern, genome))
            a3 = list(Paul_McGuire_regex(pattern, genome))
            print a1
            print a2
            print a3
            print a1 == a2, a2 == a3
            nfailed += (a1 != a2 or a2 != a3)
    print "***", nfailed
def searchnomismatch(node, genome, i):
    if i == 25:
        addtomatches(node)
    else:
        for x in range(4):
            if node[x]:
                if x == genome[i]:
                    searchnomismatch(node[x], genome, i + 1)
                else:
                    searchmismatch(node[x], genome, i + 1, i)

def searchmismatch(node, genome, i, where):
    if i == 25:
        addtomatches(node, where)
    else:
        if node[genome[i]]:
            searchmismatch(node[genome[i]], genome, i + 1, where)

def addtomatches(id, where=-1):
    matches.append((id, where))   # append a single (id, where) tuple

searchnomismatch(trie, genome[ind:ind+25], 0)
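The snippet above assumes several pieces that are never shown: a 4-way `trie` built from the query sequences, the genome encoded as small integers, and a global `matches` list. A minimal sketch of how such a trie might be built; the names (`BASES`, `encode`, `build_trie`) and the extra fifth "leaf id" slot per node are my assumptions, not from the original:

```python
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode(seq):
    """Encode a DNA string as a list of integers 0-3."""
    return [BASES[c] for c in seq]

def build_trie(sequences):
    """Build a trie of 5-element nodes: children at indices 0-3,
    and the id of the sequence ending here at index 4 (or None)."""
    root = [None, None, None, None, None]
    for seq_id, seq in enumerate(sequences):
        node = root
        for x in encode(seq):
            if node[x] is None:
                node[x] = [None, None, None, None, None]
            node = node[x]
        node[4] = seq_id   # mark the leaf with the sequence's id
    return root
```

With 230,000 length-25 sequences this trie shares common prefixes, which is what lets the one-mismatch search above branch only where the genome window actually diverges from a stored sequence.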
fname = "TgondiiGenomic_ToxoDB-6.0.fasta"
fastasrc = open(fname).read()   # yes! just read the whole dang 158Mb!

"""
Sample header:
>gb|scf_1104442823584 | organism=Toxoplasma_gondii_VEG | version=2008-07-23 | length=1448
"""
from pyparsing import (Word, nums, printables, Group, SkipTo,
                       LineEnd, Combine, OneOrMore)

integer = Word(nums).setParseAction(lambda t:int(t[0]))
genebit = Group(">gb|" + Word(printables)("id") + SkipTo("length=") + 
                "length=" + integer("genelen") + LineEnd() + 
                Combine(OneOrMore(Word("ACGTN")),adjacent=False)("gene"))

# read gene data from .fasta file - takes just under a couple of minutes
genedata = OneOrMore(genebit).parseString(fastasrc)
from pyparsing import Token

class CloseMatch(Token):
    def __init__(self, seq, maxMismatches=1):
        super(CloseMatch,self).__init__()
        self.name = seq
        self.sequence = seq
        self.maxMismatches = maxMismatches
        self.errmsg = "Expected " + self.sequence
        self.mayIndexError = False
        self.mayReturnEmpty = False

    def parseImpl( self, instring, loc, doActions=True ):
        start = loc
        instrlen = len(instring)
        maxloc = start + len(self.sequence)

        if maxloc <= instrlen:
            seq = self.sequence
            seqloc = 0
            mismatches = []
            throwException = False
            done = False
            while loc < maxloc and not done:
                if instring[loc] != seq[seqloc]:
                    mismatches.append(seqloc)
                    if len(mismatches) > self.maxMismatches:
                        throwException = True
                        done = True
                loc += 1
                seqloc += 1
        else:
            throwException = True

        if throwException:
            exc = self.myException
            exc.loc = loc
            exc.pstr = instring
            raise exc

        return loc, (instring[start:loc],mismatches)
searchseq = CloseMatch("ATCATCGAATGGAATCTAATGGAAT", 2)
for g in genedata:
    print "%s (%d)" % (g.id, g.genelen)
    print "-"*24
    for t,startLoc,endLoc in searchseq.scanString(g.gene):
        matched, mismatches = t[0]
        print "MATCH:", searchseq.sequence
        print "FOUND:", matched
        if mismatches:
            print "      ", ''.join(' ' if i not in mismatches else '*' 
                            for i,c in enumerate(searchseq.sequence))
        else:
            print "<exact match>"
        print "at location", startLoc
        print
    print
scf_1104442825154 (964)
------------------------

scf_1104442822828 (942)
------------------------

scf_1104442824510 (987)
------------------------

scf_1104442823180 (1065)
------------------------
...
scf_1104442823952 (1188)
------------------------
MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAACGGAATCGAATGGAAT
                *      *        
at location 33

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCGAATGGAAT
                       *        
at location 175

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCGAATGGAAT
                       *        
at location 474

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCGAATGGAAT
                       *        
at location 617

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCGAATAGAAT
                       *   *    
at location 718

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGATTCGAATGGAAT
                    *  *        
at location 896

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCGAATGGTAT
                       *     *  
at location 945
scf_1104442823584 (1448)
------------------------
MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGACTCGAATGGAAT
                    *  *        
at location 177

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCAAATGGAAT
                       *        
at location 203

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCAAATGGAATCGAATGGAAT
             *         *        
at location 350

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCGAATGGAAA
                       *       *
at location 523

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCAAATGGAATCGAATGGAAT
             *         *        
at location 822

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCATCGAATGGAATCTAATGGAAT
<exact match>
at location 848

MATCH: ATCATCGAATGGAATCTAATGGAAT
FOUND: ATCGTCGAATGGAGTCTAATGGAAT
          *         *           
at location 969
import re
seqStr = "ATCATCGAATGGAATCTAATGGAAT"
searchSeqREStr = seqStr + '|' + \
    '|'.join(seqStr[:i]+"[ACTGN]".replace(c,'') +seqStr[i+1:] 
             for i,c in enumerate(seqStr))

searchSeqRE = re.compile(searchSeqREStr)

for g in genedata:
    print "%s (%d)" % (g.id, g.genelen)
    print "-"*24
    for match in searchSeqRE.finditer(g.gene):
        print "MATCH:", seqStr
        print "FOUND:", match.group(0)
        print "at location", match.start()
        print
    print
import tre

pt = tre.compile("Don(ald)?( Ervin)? Knuth", tre.EXTENDED)
data = """
In addition to fundamental contributions in several branches of
theoretical computer science, Donnald Erwin Kuth is the creator of
the TeX computer typesetting system, the related METAFONT font
definition language and rendering system, and the Computer Modern
family of typefaces.
"""

fz = tre.Fuzzyness(maxerr = 3)
print fz
m = pt.search(data, fz)

if m:
    print m.groups()
    print m[0]
tre.Fuzzyness(delcost=1,inscost=1,maxcost=2147483647,subcost=1, maxdel=2147483647,maxerr=3,maxins=2147483647,maxsub=2147483647)
((95, 113), (99, 108), (102, 108))
Donnald Erwin Kuth
import regex
m=regex.findall("AA", "CAG")
>>> []
m=regex.findall("(AA){e<=1}", "CAAG") # means allow up to 1 error
m
>>> ['CA', 'AG']
in_pattern = ""   # the 25-character sequence to search for
in_genome = ""    # the genome as one contiguous string
in_mistake = 1    # maximum number of mismatches allowed
out_result = ""


kmer = len(in_pattern)

def FindMistake(v):
    mistake = 0
    for i in range(0, kmer, 1):
        if (v[i]!=in_pattern[i]):
            mistake+=1
        if mistake>in_mistake:
            return False
    return True


for i in xrange(len(in_genome)-kmer+1):
    v = in_genome[i:i+kmer]
    if FindMistake(v):
        out_result+= str(i) + " "

print out_result