
Simulating a default patternnet with feedforwardnet in Matlab?


With the following network I get very different training performance:

net = patternnet(hiddenLayerSize);
than with this one:

net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';
on the same data.

I was thinking the two networks should be identical.

What did I forget?

UPDATE

The code below demonstrates that the network's behavior depends only on which creation function is used.

Each type of network is trained twice. This rules out random-generator issues or anything of that kind. The data is the same.
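
If you want to rule out initialization effects even more explicitly, you can also reset the random number generator before each call to train, so that every pass starts from the same random weights (a minimal sketch, not part of the original test; the seed value is arbitrary):

% Hypothetical addition: fix the seed before each pass so that the
% initial weights are identical across runs.
rng(0, 'twister');
[net,tr] = train(net,x,t);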

hiddenLayerSize = 10;

% pass 1, with patternnet
net = patternnet(hiddenLayerSize);

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 1, patternnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

% pass 2, with feedforwardnet
net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 2, feedforwardnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

% pass 3, with patternnet
net = patternnet(hiddenLayerSize);

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 3, patternnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

% pass 4, with feedforwardnet
net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 4, feedforwardnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);
The output is the following:

pass 1, patternnet, performance: 0.116445
num_epochs: 353, stop: Validation stop.
pass 2, feedforwardnet, performance: 0.693561
num_epochs: 260, stop: Validation stop.
pass 3, patternnet, performance: 0.116445
num_epochs: 353, stop: Validation stop.
pass 4, feedforwardnet, performance: 0.693561
num_epochs: 260, stop: Validation stop.

Generally speaking, a network does not train in exactly the same way every time. This depends on (at least) three reasons that I know of:

  • the initial random initialization of the network weights
  • data normalization
  • data scaling

Regarding (1), the network starts from random weights within some small range and with different signs; for example, a neuron with six inputs might get initial weights such as 0.1, -0.3, 0.16, -0.23, 0.015, -0.0005, and this can lead to slightly different training results. Regarding (2), if your normalization is done poorly, the learning algorithm can converge to a local minimum and be unable to escape it. The same holds for case (3) if your data needs scaling and you did not do it; a sketch of explicit normalization follows below.
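
For points (2) and (3), here is a minimal sketch of explicit input normalization with mapminmax; x and t are the variables from the question, while xnew is a hypothetical matrix of new samples:

% Scale every input row to [-1, 1] before training and keep the
% settings so later data can be mapped in exactly the same way.
[xn, ps] = mapminmax(x, -1, 1);
[net, tr] = train(net, xn, t);

% Apply the identical mapping to new data at prediction time.
yNew = net(mapminmax('apply', xnew, ps));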

It looks like the two are not quite the same:

    >> net = patternnet(hiddenLayerSize);
    >> net2 = feedforwardnet(hiddenLayerSize,'trainscg');
    >> net.outputs{2}.processParams{2}
    
    ans =
    
        ymin: 0
        ymax: 1
    
    >> net2.outputs{2}.processParams{2}
    
    ans =
    
        ymin: -1
        ymax: 1
    
Here net.outputs{2}.processFcns{2} is mapminmax, so my guess is that one of them rescales its output to better match the range of your actual target data.
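
If that really is the remaining difference, one way to test it (a sketch, assuming nothing else differs) is to copy patternnet's output mapping range onto the feedforwardnet before training:

    % feedforwardnet maps targets to [-1, 1] by default, patternnet to [0, 1];
    % align net2 with patternnet's range before calling train.
    net2.outputs{2}.processParams{2}.ymin = 0;
    net2.outputs{2}.processParams{2}.ymax = 1;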

For future reference, you can compare the internal data structures by casting both networks to struct. So I did something like

    % Cast both network objects to plain structs and report every
    % top-level field whose contents differ (isequaln treats NaNs as equal).
    n = struct(net); n2 = struct(net2);
    for fn = fieldnames(n)'
      if ~isequaln(n.(fn{1}), n2.(fn{1}))
        fprintf('field %s differs\n', fn{1});
      end
    end
    

to help pinpoint where they differ.

Please see my update. Reasons 1-3 cannot explain it, because the results are reproduced exactly across multiple runs: patternnet consistently performs better than the (apparently) identical feedforwardnet. So the cause is probably that I initialize the feedforwardnet differently, and the question is: what is the difference?

feedforwardnet is more general-purpose and better suited to function approximation, while patternnet is aimed at pattern recognition. If your data suits patternnet better, patternnet will perform better; if it suits feedforwardnet better, then feedforwardnet will perform better.

patternnet calls feedforwardnet internally, and both return a network object. The question is what the actual difference between the two ways of creating the network is. I tried to replicate it by setting the appropriate transfer and performance functions, but apparently failed, so the question is: what did I forget to do? The data is for classification, but that does not matter here; both calls use the same data. The goal is to make the feedforwardnet behave exactly like the patternnet.
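
Putting the answer and the comments together, a tentative way to construct a feedforwardnet that mirrors a default patternnet is to combine the settings from the question with the output-mapping difference found above. This is only a sketch; there may be further internal differences (for example in the default plot functions) that do not affect training:

hiddenLayerSize = 10;

% Start from feedforwardnet and apply the patternnet-style settings
% already used in the question.
net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

% Add the output mapping range that the struct comparison revealed:
% patternnet rescales targets to [0, 1], feedforwardnet to [-1, 1].
net.outputs{2}.processParams{2}.ymin = 0;
net.outputs{2}.processParams{2}.ymax = 1;

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;

[net, tr] = train(net, x, t);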