Average sentence length for every text in a corpus (Python 3 and NLTK)

As part of an introductory Python programming course, I am analyzing the inaugural address corpus from the NLTK package. I want to find the average sentence length of every text in the corpus (so that I can compare them later), but I seem to be stuck.

I wrote this function:

from nltk.corpus import inaugural

def averageSentence(text):
    sents = inaugural.sents(fileids=['fileid_here.txt'])
    avg = sum(len(word) for word in sents) / len(sents)  # total words divided by number of sentences
    print(avg)
If I'm right, this should give me the average sentence length of one text. Now, I know I need a for loop. Shouldn't I be able to build a relatively simple, straightforward for loop around the function I just defined? This is very frustrating.

Edit: this is what I have got so far:

for fileid in inaugural.fileids():
    avg_sents = averageSentence(fileid)
    print(sum(avg_sents) / avg_sents)
Try:

Note that the +1 hardly matters once the denominator is large enough.
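
A sketch of what that suggestion might look like, with a +1 added to the denominator so that a text with no sentences cannot cause a division by zero; the function name and shape here are assumptions, not the answer's original snippet:

from nltk.corpus import inaugural  # assumes the corpus is available, e.g. after nltk.download('inaugural')

def average_sentence_length(fileid):
    sents = inaugural.sents(fileids=[fileid])
    # The +1 guards against an empty text; once a speech has many
    # sentences, it barely changes the result.
    return sum(len(sent) for sent in sents) / (len(sents) + 1)

for fileid in inaugural.fileids():
    print(fileid, average_sentence_length(fileid))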


Micro-average sentence length across all texts

The following code is a one-liner, but it is discouraged because you may end up realizing the generator twice:

>>> sum(len(sent) for sent in inaugural.sents()) / len(inaugural.sents())
29.9373459326212
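
To avoid going over the corpus twice, the sentence lengths can be collected once and reused for both the sum and the count; a minimal sketch, not the answer's own code:

from nltk.corpus import inaugural

# Materialize every sentence length once, then reuse the list for both
# the numerator and the denominator of the micro-average.
sent_lengths = [len(sent) for sent in inaugural.sents()]
print(sum(sent_lengths) / len(sent_lengths))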

Macro-average sentence length across all texts

>>> sum(sum(len(sent) for sent in inaugural.sents(fileids=[fileid])) / len(inaugural.sents(fileids=[fileid])) for fileid in inaugural.fileids()) / len(inaugural.fileids())
32.84054349411484
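
The same macro-average can be written out more readably, and in Python 3 syntax (the interactive session below was run under Python 2, hence the __future__ import and the print statement); a sketch using statistics.mean:

from statistics import mean
from nltk.corpus import inaugural

# One average sentence length per speech, then the mean of those
# averages (the macro-average across texts).
per_text = {
    fileid: sum(len(sent) for sent in inaugural.sents(fileids=[fileid]))
            / len(inaugural.sents(fileids=[fileid]))
    for fileid in inaugural.fileids()
}
print(mean(per_text.values()))       # macro-average, ~32.84 as above
for fileid, avg in sorted(per_text.items()):
    print(fileid, avg)               # per-text values, as in the session below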

Average sentence length per text

>>> from __future__ import division
>>> from nltk.corpus import inaugural
>>> inaugural.fileids()
[u'1789-Washington.txt', u'1793-Washington.txt', u'1797-Adams.txt', u'1801-Jefferson.txt', u'1805-Jefferson.txt', u'1809-Madison.txt', u'1813-Madison.txt', u'1817-Monroe.txt', u'1821-Monroe.txt', u'1825-Adams.txt', u'1829-Jackson.txt', u'1833-Jackson.txt', u'1837-VanBuren.txt', u'1841-Harrison.txt', u'1845-Polk.txt', u'1849-Taylor.txt', u'1853-Pierce.txt', u'1857-Buchanan.txt', u'1861-Lincoln.txt', u'1865-Lincoln.txt', u'1869-Grant.txt', u'1873-Grant.txt', u'1877-Hayes.txt', u'1881-Garfield.txt', u'1885-Cleveland.txt', u'1889-Harrison.txt', u'1893-Cleveland.txt', u'1897-McKinley.txt', u'1901-McKinley.txt', u'1905-Roosevelt.txt', u'1909-Taft.txt', u'1913-Wilson.txt', u'1917-Wilson.txt', u'1921-Harding.txt', u'1925-Coolidge.txt', u'1929-Hoover.txt', u'1933-Roosevelt.txt', u'1937-Roosevelt.txt', u'1941-Roosevelt.txt', u'1945-Roosevelt.txt', u'1949-Truman.txt', u'1953-Eisenhower.txt', u'1957-Eisenhower.txt', u'1961-Kennedy.txt', u'1965-Johnson.txt', u'1969-Nixon.txt', u'1973-Nixon.txt', u'1977-Carter.txt', u'1981-Reagan.txt', u'1985-Reagan.txt', u'1989-Bush.txt', u'1993-Clinton.txt', u'1997-Clinton.txt', u'2001-Bush.txt', u'2005-Bush.txt', u'2009-Obama.txt']
>>> for fileid in inaugural.fileids():
...     avg = sum(len(sent) for sent in inaugural.sents(fileids=[fileid])) / len(inaugural.sents(fileids=[fileid]))
...     print fileid, avg
... 
1789-Washington.txt 64.0833333333
1793-Washington.txt 36.75
1797-Adams.txt 69.8648648649
1801-Jefferson.txt 46.0714285714
1805-Jefferson.txt 52.9777777778
1809-Madison.txt 60.2380952381
1813-Madison.txt 39.5151515152
1817-Monroe.txt 30.2704918033
1821-Monroe.txt 38.0542635659
1825-Adams.txt 42.5675675676
1829-Jackson.txt 48.32
1833-Jackson.txt 42.2333333333
1837-VanBuren.txt 43.9052631579
1841-Harrison.txt 43.6428571429
1845-Polk.txt 33.9607843137
1849-Taylor.txt 53.7272727273
1853-Pierce.txt 35.1634615385
1857-Buchanan.txt 34.808988764
1861-Lincoln.txt 29.0217391304
1865-Lincoln.txt 29.0740740741
1869-Grant.txt 30.2195121951
1873-Grant.txt 33.5909090909
1877-Hayes.txt 46.1694915254
1881-Garfield.txt 28.9196428571
1885-Cleveland.txt 41.5454545455
1889-Harrison.txt 30.2547770701
1893-Cleveland.txt 37.1206896552
1897-McKinley.txt 33.6230769231
1901-McKinley.txt 24.5
1905-Roosevelt.txt 33.0606060606
1909-Taft.txt 36.7672955975
1913-Wilson.txt 28.0147058824
1917-Wilson.txt 27.6
1921-Harding.txt 25.2080536913
1925-Coolidge.txt 22.5482233503
1929-Hoover.txt 24.6202531646
1933-Roosevelt.txt 24.2705882353
1937-Roosevelt.txt 21.03125
1941-Roosevelt.txt 22.5882352941
1945-Roosevelt.txt 24.5
1949-Truman.txt 21.7931034483
1953-Eisenhower.txt 22.5609756098
1957-Eisenhower.txt 20.8369565217
1961-Kennedy.txt 29.7307692308
1965-Johnson.txt 18.2446808511
1969-Nixon.txt 22.8773584906
1973-Nixon.txt 29.3913043478
1977-Carter.txt 26.0377358491
1981-Reagan.txt 22.0551181102
1985-Reagan.txt 23.380952381
1989-Bush.txt 18.7103448276
1993-Clinton.txt 22.9012345679
1997-Clinton.txt 21.9821428571
2001-Bush.txt 18.8144329897
2005-Bush.txt 25.0105263158
2009-Obama.txt 24.3392857143

Average number of words per text (macro-average)

>>> sum([sum(len(sent) for sent in inaugural.sents(fileids=[fileid])) for fileid in inaugural.fileids()]) / len(inaugural.fileids())
2602.410714285714
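
If only the per-speech word counts are needed, the corpus reader's words() view gives them directly; a short sketch, an illustration rather than part of the original answer:

from statistics import mean
from nltk.corpus import inaugural

# Number of word tokens per speech, averaged over all speeches.
word_counts = [len(inaugural.words(fileids=[fileid])) for fileid in inaugural.fileids()]
print(mean(word_counts))   # should match the ~2602 figure above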

Please be specific about what your problem is. What have you tried so far? What errors did you get?
My apologies. I have edited my post now.
I appreciate the answer, but it basically gives me the same solution I already had. I'm not looking for the average over all sentences in the corpus; I need it for every text in the corpus, so that I can compare the averages.
Just to clarify: do you want "average text length per text" or "average sentence length per text"?
Average sentence length per text, as described in my post. Thanks anyway, though, since you have already edited your answer for me!
Glad the answer was useful.