SQL: How to find n-grams in the Reddit dataset with BigQuery

I'm looking at the Reddit dataset, and at an application that finds bigrams with BigQuery. However, the answer to that question doesn't handle URLs, quotation marks, and so on. Is there a better way to do this, and to generalize it to trigrams instead of bigrams?

This will do it:

SELECT word, nextword, nextword2, COUNT(*) c
FROM (
  SELECT pos, id, word,
         LEAD(word) OVER(PARTITION BY id ORDER BY pos) nextword,
         LEAD(word, 2) OVER(PARTITION BY id ORDER BY pos) nextword2
  FROM (
    SELECT id, word, pos FROM FLATTEN(
      (SELECT id, REGEXP_REPLACE(word, 'QUOTE', "'") word, POSITION(word) pos FROM
       (SELECT id, SPLIT(REGEXP_REPLACE(REGEXP_REPLACE(REGEXP_REPLACE(LOWER(body), "'", 'QUOTE'), r'http.?://[^ ]*', r'URL'), r'\b', ' '), ' ') word
        FROM [fh-bigquery:reddit_comments.2016_01]
        WHERE score>200
        HAVING REGEXP_MATCH(word, '[a-zA-Z0-9]')
       )
      ), word)
  )
)
WHERE nextword IS NOT NULL
GROUP EACH BY 1, 2, 3
ORDER BY c DESC
LIMIT 100
(Note that I'm filtering for comments with score>200 to get faster results - you can remove that filter to go through the whole month.)
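For readers who want to check the preprocessing logic locally, here is a rough Python sketch of what the query does to each comment body: protect apostrophes, collapse URLs to a single URL token, split on word boundaries, keep only tokens containing an alphanumeric character, then count sliding trigrams. The helper names are mine, not part of the original answer, and this is only an approximation of BigQuery's regex/split semantics:

```python
import re
from collections import Counter

def tokenize(body):
    """Approximate the query's preprocessing for one comment body."""
    text = body.lower()
    text = text.replace("'", "QUOTE")               # protect apostrophes
    text = re.sub(r"http.?://[^ ]*", "URL", text)   # collapse URLs to one token
    text = re.sub(r"\b", " ", text)                 # insert spaces at word boundaries
    words = text.split(" ")
    # keep tokens with an alphanumeric char (the HAVING clause), restore apostrophes
    return [w.replace("QUOTE", "'") for w in words if re.search(r"[a-zA-Z0-9]", w)]

def trigram_counts(comments):
    """Count trigrams across a list of comment bodies."""
    counts = Counter()
    for body in comments:
        words = tokenize(body)
        for i in range(len(words) - 2):
            counts[(words[i], words[i + 1], words[i + 2])] += 1
    return counts

# Example: apostrophes and URLs survive as single tokens.
print(tokenize("Don't visit http://example.com today!"))
# → ["don't", 'visit', 'URL', 'today']
```

This is only useful for prototyping the tokenizer; at the scale of a full month of Reddit comments you would still run the SQL above in BigQuery.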
