实施"基本法"的问题;波坍缩函数“;Python中的算法

实施"基本法"的问题;波坍缩函数“;Python中的算法,python,algorithm,markov-chains,procedural-generation,Python,Algorithm,Markov Chains,Procedural Generation,简而言之: 我在Python2.7中的实现存在缺陷,但我无法确定问题所在。我需要帮助来找出我可能遗漏了什么或做错了什么 什么是波折叠函数算法? 它是Maxim Gumin于2016年编写的一种算法,可以从样本图像生成程序模式。您可以在动作(二维重叠模型)和(三维平铺模型)中看到它 此实施的目标: 将算法(2D重叠模型)归结为其本质,并避免算法的重复性和笨拙性(令人惊讶的长且难以阅读)。这是一个尝试,使一个更短,更清晰和python版本的算法 if not inte

In short:

My implementation in Python 2.7 is flawed but I can't identify the problem. I need help to figure out what I'm possibly missing or doing wrong.

What is the Wave Collapse Function algorithm?

It is an algorithm written in 2016 by Maxim Gumin that can generate procedural patterns from a sample image. You can see it in action with the 2D overlapping model and the 3D tiled model.

Goal of this implementation:

Boil the algorithm (2D overlapping model) down to its essence and avoid the redundancy and clumsiness of the original script (which is surprisingly long and difficult to read). This is an attempt to make a shorter, clearer and more pythonic version of this algorithm.

Characteristics of this implementation:

I'm using Processing (Python mode), a software for visual design that makes image manipulation easier (no PIL, no Matplotlib, ...). The main drawback is that I'm limited to Python 2.7 and can NOT import numpy.

Unlike the original version, this implementation:

  • is not object oriented (in its current state), making it easier to understand / closer to pseudocode
  • is using 1D arrays instead of 2D arrays
  • is using array slicing for matrix manipulation
The algorithm (as I understand it)

1/ Read the input bitmap, store every NxN pattern and count their occurrences. (Optional: augment pattern data with rotations and reflections.)

For example, when N = 3:
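As a rough sketch of this step (not the actual script: the 4x4 list of color values below is a made-up stand-in for the input bitmap), extracting and counting wrapping NxN patterns could look like this:

from collections import Counter
from itertools import chain

N = 3
# Hypothetical 4x4 "image" of color values, standing in for the real input bitmap
img = [[1, 1, 2, 2],
       [1, 3, 3, 2],
       [2, 3, 3, 1],
       [2, 2, 1, 1]]
ih, iw = len(img), len(img[0])

patterns = []
for y in range(ih):
    for x in range(iw):
        # NxN block with (x, y) as its top-left corner, wrapping around the edges
        block = [[img[(y + dy) % ih][(x + dx) % iw] for dx in range(N)] for dy in range(N)]
        patterns.append(tuple(chain.from_iterable(block)))  # flatten to 1D so it can be counted

counts = Counter(patterns)  # occurrences of each unique NxN pattern
print(len(counts))          # number of unique patterns found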

2/ Precompute and store every possible adjacency relation between patterns. In the example below, patterns 207, 242, 182 and 125 can overlap the right side of pattern 246.
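A minimal sketch of such a compatibility test (the helper name and the two flattened 3x3 patterns below are made up for illustration):

N = 3

def overlaps_right(p_left, p_right):
    # True if p_right can sit one cell to the right of p_left:
    # the last two columns of p_left must match the first two columns of p_right
    left_cols = [v for i, v in enumerate(p_left) if i % N != 0]        # columns 1..2 of p_left
    right_cols = [v for i, v in enumerate(p_right) if i % N != N - 1]  # columns 0..1 of p_right
    return left_cols == right_cols

pA = (1, 2, 3,
      4, 5, 6,
      7, 8, 9)
pB = (2, 3, 0,
      5, 6, 0,
      8, 9, 0)
print(overlaps_right(pA, pB))  # True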

3/ Create an array with the dimensions of the output (called W, for wave). Each element of this array is an array holding the state (True or False) of each pattern.

For example, let's assume we count 326 unique patterns in the input and we want our output to be of dimensions 20 by 20 (400 cells). Then the "Wave" array will contain 400 (20x20) arrays, each containing 326 boolean values.

At start, all booleans are set to True because every pattern is allowed at any position of the Wave.

W = [[True for pattern in xrange(len(patterns))] for cell in xrange(20*20)]
4/ Create another array with the dimensions of the output (called H). Each element of this array is a float holding the "entropy" value of its corresponding cell in the output.

Entropy here is computed from the number of valid patterns at a specific location in the Wave. The more valid patterns a cell has (set to True), the higher its entropy is.

For example, to compute the entropy of cell 22 we look at its corresponding index in the Wave (W[22]) and count the number of booleans set to True. With that count we can now compute the entropy with the Shannon formula. The result of that calculation is then stored in H at the same index (H[22]).
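A minimal sketch of that computation (the counts below are made up; the formula is the same one used further down in the script):

import math

def shannon_entropy(weights):
    # weights: occurrence counts of the patterns still set to True in a cell
    total = float(sum(weights))
    return math.log(total) - sum(w * math.log(w) for w in weights) / total

print(shannon_entropy([5, 2, 1]))  # entropy of a cell with 3 remaining patterns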

At start, all cells have the same entropy value (same float at every position in H) since all patterns are set to True for every cell.

H = [entropyValue for cell in xrange(20*20)]
These 4 steps are introductory steps, they are necessary to initialize the algorithm. Now starts the core of the algorithm:

5/ Observation:

Find the index of the cell with the minimum nonzero entropy. (Note that at the very first iteration all entropies are equal so we need to pick the index of a cell randomly.)

Then, look at the patterns that are still valid at the corresponding index in the Wave and select one of them randomly, weighted by how frequently that pattern appears in the input image (weighted choice).

For example, if the lowest value in H is at index 22 (H[22]), we look at all the patterns set to True at W[22] and pick one randomly based on the number of times it appears in the input. (Remember, in step 1 we counted the number of occurrences for each pattern.) This ensures that patterns appear with a similar distribution in the output as in the input.
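Since random.choice() cannot take weights in Python 2.7, the weighted pick has to be coded by hand. A minimal standalone sketch (using Python's random module instead of Processing's random(), with made-up indices and counts):

import random as rnd

def weighted_choice(indices, counts):
    # indices: pattern indices still set to True; counts: their occurrence counts
    total = float(sum(counts))
    r = rnd.random()
    acc = 0.0
    for idx, c in zip(indices, counts):
        acc += c / total
        if r < acc:
            return idx
    return indices[-1]  # guard against floating-point rounding

print(weighted_choice([12, 99, 246], [1, 3, 6]))  # 246 is picked ~60% of the time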

6/ Collapse:

We now assign the index of the selected pattern to the cell with the minimum entropy. Meaning that every pattern at the corresponding location in the Wave is set to False, except for the one that has been chosen.

For example, if pattern 246 in W[22] was set to True and has been selected, then all other patterns are set to False. Cell 22 is assigned pattern 246. In the output, cell 22 will be filled with the first color (top left corner) of pattern 246 (blue in this example).
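A minimal illustration of that collapse (326 patterns and the chosen index 246 come from the running example):

npat = 326
W = [[True] * npat for _ in range(20 * 20)]  # wave: every pattern allowed everywhere

chosen = 246
W[22] = [i == chosen for i in range(npat)]   # cell 22: only pattern 246 stays True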

7/ Propagation:

Because of the adjacency constraints, this pattern selection has consequences on the neighboring cells in the Wave. The arrays of booleans corresponding to the cells on the left and right of, above and below the recently collapsed cell need to be updated accordingly.
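A minimal sketch of how the four neighbor indices are found in a d*d Wave stored as a 1D list (wrapping at the edges, with cell index 22 taken from the running example):

d = 20
emin = 22  # index of the collapsed cell
directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # left, right, up, down

for dx, dy in directions:
    x = (emin % d + dx) % d
    y = (emin // d + dy) % d   # integer division, as in the script
    print(x + y * d)           # 21, 23, 2, 42

The full script (imports, setup(), compatibility test and draw() loop) follows: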

from collections import Counter
from itertools import chain, izip
import math

d = 20  # dimensions of output (array of dxd cells)
N = 3 # dimensions of a pattern (NxN matrix)

Output = [120 for i in xrange(d*d)] # array holding the color value for each cell in the output (at start each cell is grey = 120)

def setup():
    size(800, 800, P2D)
    textSize(11)

    global W, H, A, freqs, patterns, directions, xs, ys, npat

    img = loadImage('Flowers.png') # path to the input image
    iw, ih = img.width, img.height # dimensions of input image
    xs, ys = width//d, height//d # dimensions of cells (squares) in output
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)] # NxN matrix to read every patterns contained in input image
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)] # (x, y) tuples to access the 4 neighboring cells of a collapsed cell
    all = [] # array list to store all the patterns found in input



    # Stores the different patterns found in input
    for y in xrange(ih):
        for x in xrange(iw):

            ''' The one-liner below (cmat) creates a NxN matrix with (x, y) being its top left corner.
                This matrix will wrap around the edges of the input image.
                The whole snippet reads every NxN part of the input image and store the associated colors.
                Each NxN part is called a 'pattern' (of colors). Each pattern can be rotated or flipped (not mandatory). '''


            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]

            # Storing rotated patterns (90°, 180°, 270°, 360°) 
            for r in xrange(4):
                cmat = zip(*cmat[::-1]) # +90° rotation
                all.append(cmat) 

            # Storing reflected patterns (vertical/horizontal flip)
            all.append(cmat[::-1])
            all.append([a[::-1] for a in cmat])




    # Flatten pattern matrices + count occurrences

    ''' Once every pattern has been stored,
        - we flatten them (convert to 1D) for convenience
        - count the number of occurrences for each one of them (one pattern can be found multiple times in input)
        - select unique patterns only
        - store them from less common to most common (needed for weighted choice)'''

    all = [tuple(chain.from_iterable(p)) for p in all] # flatten pattern matrices (NxN --> [])
    c = Counter(all)
    freqs = sorted(c.values()) # number of occurrences for each unique pattern, in sorted order
    npat = len(freqs) # number of unique patterns
    total = sum(freqs) # sum of frequencies of unique patterns
    patterns = [p[0] for p in c.most_common()[:-npat-1:-1]] # list of unique patterns sorted from less common to most common



    # Computes entropy

    ''' The entropy of a cell is correlated to the number of possible patterns that cell holds.
        The more a cell has valid patterns (set to 'True'), the higher its entropy is.
        At start, every pattern is set to 'True' for each cell. So each cell holds the same high entropy value'''

    ent = math.log(total) - sum(map(lambda x: x * math.log(x), freqs)) / total



    # Initializes the 'wave' (W), entropy (H) and adjacencies (A) array lists

    W = [[True for _ in xrange(npat)] for i in xrange(d*d)] # every pattern is set to 'True' at start, for each cell
    H = [ent for i in xrange(d*d)] # same entropy for each cell at start (every pattern is valid)
    A = [[set() for dir in xrange(len(directions))] for i in xrange(npat)] #see below for explanation




    # Compute patterns compatibilities (check if some patterns are adjacent, if so -> store them based on their location)

    ''' EXAMPLE:
    If pattern index 42 can be placed to the right of pattern index 120,
    we will store this adjacency rule as follow:

                     A[120][1].add(42)

    Here '1' stands for 'right' or 'East'/'E'

    0 = left or West/W
    1 = right or East/E
    2 = up or North/N
    3 = down or South/S '''

    # Comparing patterns to each other
    for i1 in xrange(npat):
        for i2 in xrange(npat):
            for dir in (0, 2):
                if compatible(patterns[i1], patterns[i2], dir):
                    A[i1][dir].add(i2)
                    A[i2][dir+1].add(i1)


def compatible(p1, p2, dir):

    '''NOTE: 
    what is referred to as 'columns' and 'rows' here below are not really columns and rows 
    since we are dealing with 1D patterns. Remember here N = 3'''

    # If the first two columns of pattern 1 == the last two columns of pattern 2 
    # --> pattern 2 can be placed to the left (0) of pattern 1
    if dir == 0:
        return [n for i, n in enumerate(p1) if i%N!=2] == [n for i, n in enumerate(p2) if i%N!=0]

    # If the first two rows of pattern 1 == the last two rows of pattern 2
    # --> pattern 2 can be placed on top (2) of pattern 1
    if dir == 2:
        return p1[:6] == p2[-6:]



def draw():    # Equivalent of a 'while' loop in Processing (all the code below will be looped over and over until all cells are collapsed)
    global H, W, grid

    ### OBSERVATION
    # Find cell with minimum non-zero entropy (not collapsed yet)

    '''Randomly select 1 cell at the first iteration (when all entropies are equal), 
       otherwise select cell with minimum non-zero entropy'''

    emin = int(random(d*d)) if frameCount <= 1 else H.index(min(H)) 



    # Stopping mechanism

    ''' When 'H' array is full of 'collapsed' cells --> stop iteration '''

    if H[emin] == 'CONT' or H[emin] == 'collapsed': 
        print 'stopped'
        noLoop()
        return



    ### COLLAPSE
    # Weighted choice of a pattern

    ''' Among the patterns available in the selected cell (the one with min entropy), 
        select one pattern randomly, weighted by the frequency that pattern appears in the input image.
        With Python 2.7 no possibility to use random.choice(x, weight) so we have to hard code the weighted choice '''

    lfreqs = [b * freqs[i] for i, b in enumerate(W[emin])] # frequencies of the patterns available in the selected cell
    weights = [float(f) / sum(lfreqs) for f in lfreqs] # normalizing these frequencies
    cumsum = [sum(weights[:i]) for i in xrange(1, len(weights)+1)] # cumulative sums of normalized frequencies
    r = random(1)
    idP = sum([cs < r for cs in cumsum])  # index of selected pattern 

    # Set all patterns to False except for the one that has been chosen   
    W[emin] = [0 if i != idP else 1 for i, b in enumerate(W[emin])]

    # Marking selected cell as 'collapsed' in H (array of entropies)
    H[emin] = 'collapsed' 

    # Storing first color (top left corner) of the selected pattern at the location of the collapsed cell
    Output[emin] = patterns[idP][0]



    ### PROPAGATION
    # For each neighbor (left, right, up, down) of the recently collapsed cell
    for dir, t in enumerate(directions):
        x = (emin%d + t[0])%d
        y = (emin/d + t[1])%d
        idN = x + y * d #index of neighbor

        # If that neighbor hasn't been collapsed yet
        if H[idN] != 'collapsed': 

            # Check indices of all available patterns in that neighboring cell
            available = [i for i, b in enumerate(W[idN]) if b]

            # Among these indices, select indices of patterns that can be adjacent to the collapsed cell at this location
            intersection = A[idP][dir] & set(available) 

            # If the neighboring cell contains indices of patterns that can be adjacent to the collapsed cell
            if intersection:

                # Remove indices of all other patterns that cannot be adjacent to the collapsed cell
                W[idN] = [True if i in list(intersection) else False for i in xrange(npat)]


                ### Update entropy of that neighboring cell accordingly (less patterns = lower entropy)

                # If only 1 pattern available left, no need to compute entropy because entropy is necessarily 0
                if len(intersection) == 1: 
                    H[idN] = '0' # Putting a str at this location in 'H' (array of entropies) so that it doesn't return 0 (float) when looking for minimum entropy (min(H)) at next iteration


                # If more than 1 pattern is still available --> compute/update entropy + add noise (to prevent cells from sharing the same minimum entropy value)
                else:
                    lfreqs = [b * f for b, f in izip(W[idN], freqs) if b] 
                    ent = math.log(sum(lfreqs)) - sum(map(lambda x: x * math.log(x), lfreqs)) / sum(lfreqs)
                    H[idN] = ent + random(.001)


            # If no index of adjacent pattern in the list of pattern indices of the neighboring cell
            # --> mark cell as a 'contradiction'
            else:
                H[idN] = 'CONT'



    # Draw output

    ''' dxd grid of cells (squares) filled with their corresponding color.      
        That color is the first (top-left) color of the pattern assigned to that cell '''

    for i, c in enumerate(Output):
        x, y = i%d, i/d
        fill(c)
        rect(x * xs, y * ys, xs, ys)

        # Displaying corresponding entropy value
        fill(0)
        text(H[i], x * xs + xs/2 - 12, y * ys + ys/2)
For example, if cell 22 has been collapsed and assigned pattern 246, then W[21] (left), W[23] (right), W[2] (up) and W[42] (down) have to be modified so that they only keep True the patterns that can be adjacent to pattern 246.

For example, looking back at the picture of step 2, we can see that only patterns 207, 242, 182 and 125 can be placed on the right side of pattern 246. That means that W[23] (the right side of cell 22) needs to keep patterns 207, 242, 182 and 125 and discard all the others.
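A minimal illustration of that update with sets (the adjacency entry and the contents of W[23] are filled in by hand for the example):

allowed_right_of_246 = set([207, 242, 182, 125])  # hand-filled adjacency entry
W23 = set([3, 125, 182, 200, 207, 242, 300])      # patterns still allowed in cell 23

W23 = W23 & allowed_right_of_246                  # keep only the compatible patterns
print(sorted(W23))                                # [125, 182, 207, 242]

The stack-based propagation below applies this same intersection to every affected neighbor until nothing changes anymore: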
while stack:
    idC = stack.pop() # index of current cell
    for dir, t in enumerate(mat):
        x = (idC%w + t[0])%w
        y = (idC/w + t[1])%h
        idN = x + y * w  # index of neighboring cell
        if H[idN] != 'c': 
            possible = set([n for idP in W[idC] for n in A[idP][dir]])
            available = W[idN]
            if not available.issubset(possible):
                intersection = possible & available
                if not intersection:
                    print 'contradiction'
                    noLoop()
                W[idN] = intersection
                lfreqs = [freqs[i] for i in W[idN]]
                H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)
                stack.add(idN)
from collections import Counter
from itertools import chain
from random import choice

w, h = 40, 25
N = 3

def setup():
    size(w*20, h*20, P2D)
    background('#FFFFFF')
    frameRate(1000)
    noStroke()

    global W, A, H, patterns, freqs, npat, mat, xs, ys

    img = loadImage('Flowers.png') 
    iw, ih = img.width, img.height
    xs, ys = width//w, height//h
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)]
    mat = ((-1, 0), (1, 0), (0, -1), (0, 1))
    all = []

    for y in xrange(ih):
        for x in xrange(iw):
            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]
            for r in xrange(4):
                cmat = zip(*cmat[::-1])
                all.append(cmat)
                all.append(cmat[::-1])
                all.append([a[::-1] for a in cmat])

    all = [tuple(chain.from_iterable(p)) for p in all] 
    c = Counter(all)
    patterns = c.keys()
    freqs = c.values()
    npat = len(freqs) 

    W = [set(range(npat)) for i in xrange(w*h)] 
    A = [[set() for dir in xrange(len(mat))] for i in xrange(npat)]
    H = [100 for i in xrange(w*h)] 

    for i1 in xrange(npat):
        for i2 in xrange(npat):
            if [n for i, n in enumerate(patterns[i1]) if i%N!=(N-1)] == [n for i, n in enumerate(patterns[i2]) if i%N!=0]:
                A[i1][0].add(i2)
                A[i2][1].add(i1)
            if patterns[i1][:(N*N)-N] == patterns[i2][N:]:
                A[i1][2].add(i2)
                A[i2][3].add(i1)


def draw():    
    global H, W

    emin = int(random(w*h)) if frameCount <= 1 else H.index(min(H)) 

    if H[emin] == 'c': 
        print 'finished'
        noLoop()

    id = choice([idP for idP in W[emin] for i in xrange(freqs[idP])])
    W[emin] = [id]
    H[emin] = 'c' 

    stack = set([emin])
    while stack:
        idC = stack.pop() 
        for dir, t in enumerate(mat):
            x = (idC%w + t[0])%w
            y = (idC/w + t[1])%h
            idN = x + y * w 
            if H[idN] != 'c': 
                possible = set([n for idP in W[idC] for n in A[idP][dir]])
                if not W[idN].issubset(possible):
                    intersection = possible & W[idN] 
                    if not intersection:
                        print 'contradiction'
                        noLoop()
                        return

                    W[idN] = intersection
                    lfreqs = [freqs[i] for i in W[idN]]
                    H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)
                    stack.add(idN)

    fill(patterns[id][0])
    rect((emin%w) * xs, (emin/w) * ys, xs, ys)