Shell: how to get specific data between two identical marker patterns


Using awk or sed, how can I select the lines that fall between two occurrences of the same marker pattern? There may be multiple sections delimited by these patterns.

For example, suppose the file contains:

$$$
lines between dollar and AT
@@@
lines between first and second AT
@@@
lines between second and third AT
@@@
lines between third and fourth AT
@@@
Using a range expression, I get the content between $$$ and the first occurrence of @@@.
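The question omits the exact command; a sed range expression along the following lines (filename input.txt assumed) would reproduce the described behavior of stopping at the first closing marker:

sed -n '/^\$\$\$$/,/^@@@$/p' input.txt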

My question is: how do I get the content between the first and the third occurrence of @@@?

The expected output is:

lines between first and second AT
@@@
lines between second and third AT

awk seems the saner tool for this job, mostly because it lets you specify parameters on the command line much more easily than sed does (which is to say: it handles numbers sanely).

I would use:

awk -v pattern='^@@@$' -v first=1 -v last=3 '$0 ~ pattern { ++count; if(count == first) next } count == last { exit } count >= first' 2.txt
This works as follows:

$0 ~ pattern {              # When the delimiter pattern is found:
  ++count                   # increase counter.
  if(count == first) {      # If we found the starting pattern
    next                    # skip to next line. This handles the fencepost.
  }
}
count == last {             # If we found the end pattern, stop processing.
  exit
}
count >= first              # Otherwise, if the line comes after the starting
                            # pattern, print the line.
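
For instance, with the sample file from the question saved as 2.txt:

awk -v pattern='^@@@$' -v first=1 -v last=3 '$0 ~ pattern { ++count; if(count == first) next } count == last { exit } count >= first' 2.txt

prints exactly the expected output:

lines between first and second AT
@@@
lines between second and third AT

Since the boundaries are ordinary -v variables, selecting a different section only means changing first and last; for example, -v first=2 -v last=4 prints the lines between the second and fourth @@@ markers.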

As a start, add a counter, check the counter, and print data when it has the right value:
awk '/@@@/{cnt+=1}cnt;cnt==3{exit}' input.txt
You might also, following Fredrik Pihl's comment, strip the first and last lines: awk '/@@@/{cnt+=1}cnt;cnt==3{exit}' sed_sample.txt | sed -e '1,1d' -e '$d'.
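
To see why the trailing sed step is needed: on the sample input (assumed here to be saved as sed_sample.txt), the awk command alone also prints the marker lines at both ends:

awk '/@@@/{cnt+=1}cnt;cnt==3{exit}' sed_sample.txt

gives

@@@
lines between first and second AT
@@@
lines between second and third AT
@@@

and sed -e '1,1d' -e '$d' then deletes the first and last of those lines, leaving the expected output.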