Return the number of errors from a Splunk search in Python


Is there any way to get the number of errors that occurred during a Splunk search using the splunklib.results module, or any other splunklib module?

Here is my code so far:

#!/usr/bin/env python
#purpose of script: To connect to Splunk, execute a query, and write the query results out to an excel file.
#query results = multiple dynamic # of rows. 7 columns.
import csv #needed for csv.DictWriter below
import splunklib.client as client #splunklib.client class is used to connect to splunk, authenticate, and maintain session
import splunklib.results as results #module for returning results and printing/writing them out

listOfAppIDs = []
#open file to read each line and add each line in file to an array. These are our appID's to search
with open('filelocation.txt', 'r') as fi:
    for line in fi:
        listOfAppIDs.append(line.rstrip('\n'))
print listOfAppIDs

#identify variables used to log in
HOST = "8.8.8.8"
PORT = 8089
USERNAME = "uName"
PASSWORD = "pWord"

startPoint = "appID1" #initial start point in array

outputCsv = open('filelocation.csv', 'wb')
fieldnames = ['Application ID', 'transport', 'dst_port', 'Average Throughput per Month','Total Sessions Allowed', 'Unique Source IPs', 'Unique Destination IPs']
writer = csv.DictWriter(outputCsv, fieldnames=fieldnames)
writer.writeheader()

def connect():
    global startPoint , item
    print "startPoint: " + startPoint

    #Create a service instance by using the connect function and log in
    service = client.connect(
        host=HOST,
        port=PORT,
        username=USERNAME,
        password=PASSWORD,
        autologin=True
    )   
    jobs = service.jobs # Get the collection of jobs/searches
    kwargs_blockingsearch = {"exec_mode": "blocking"}
    log_type = "ERROR" #message type to count as an error

    try:
        for item in listOfAppIDs:
            errorCount=0
            print "item: " + item
            if (item >= startPoint):    
                searchquery_blocking = "search splunkQery"
                print item + ':'
                job = jobs.create(searchquery_blocking, **kwargs_blockingsearch) # A blocking search returns query result. Search executes here
                print "Splunk query for appID " , item , " completed! \n"
                resultCount = job["resultCount"] #number of results this job (splunk query) returned
                print "result count " , resultCount
                rr = results.ResultsReader(job.results())
                for result in rr:
                    if isinstance(result, results.Message):
                        # Diagnostic messages may be returned in the results
                        # Check the type and do something.
                        if result.type == log_type:
                            print '%s: %s' % (result.type, result.message)
                            errorCount+=1
                    elif isinstance(result, dict):
                        # Normal events are returned as dicts
                        # Do something with them if required.
                        print result
                        writer.writerow(result) #result is a dict keyed by field name
                assert rr.is_preview == False
    except Exception:
        print "\nexcept\n"
        startPoint = item #return to connect function but start where startPoint is at in array
        connect()

    print "done!"

connect()
I get the following error from the code above:

'OrderedDict' object has no attribute 'messages'

from splunklib import results
my_feed=results.ResultsReader(open("results.xml"))

log_type='ERROR'

n_errors=0
for result in my_feed:
    if isinstance(result, results.Message):
        if result.type == log_type:
            print result.message
            n_errors += 1
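The counting pattern above can be exercised without a live Splunk feed. In this sketch, Message is a hypothetical stand-in for splunklib's results.Message (just the type and message attributes), and the list mixes messages with event dicts the way a results stream does:

```python
class Message(object):
    # Minimal stand-in for splunklib.results.Message: just type + message.
    def __init__(self, type, message):
        self.type = type
        self.message = message

# A fake results stream: diagnostic messages mixed with event dicts.
feed = [Message("ERROR", "boom"), {"host": "web01"},
        Message("INFO", "ok"), Message("ERROR", "bang")]

n_errors = sum(1 for r in feed
               if isinstance(r, Message) and r.type == "ERROR")
print(n_errors)  # 2 errors in this sample stream
```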

data.load() may be the problem, because it expects XML with a single root node. If a feed contains multiple result nodes, you can work around this by wrapping the feed, i.e.:

"<root>" + open("feed.xml").read() + "</root>"
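The single-root requirement is easy to reproduce with the standard library alone (no splunklib needed); this sketch wraps two sibling nodes so the parser accepts them:

```python
import xml.etree.ElementTree as ET

# Two sibling <result> nodes with no shared root; parsing the raw string
# as-is would raise a ParseError, so wrap it in a single root element first.
feed = '<result>a</result><result>b</result>'
wrapped = '<root>' + feed + '</root>'
root = ET.fromstring(wrapped)
print(len(root.findall('result')))  # 2
```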

If you have access to the raw feed instead of the data object, you can use lxml instead of the splunk lib:

len(lxml.etree.parse("results.xml").findall(".//messages/msg[@type='ERROR']"))
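If lxml is not available, the same count works with the standard library's xml.etree.ElementTree, which supports the same limited path syntax. The feed below is a made-up sample, not real Splunk job output:

```python
import xml.etree.ElementTree as ET

# Made-up sample feed; a real Splunk results feed has more structure.
xml_text = '''<response>
  <messages>
    <msg type="ERROR">Search failed</msg>
    <msg type="INFO">All good</msg>
    <msg type="ERROR">Timed out</msg>
  </messages>
</response>'''

root = ET.fromstring(xml_text)
n_errors = len(root.findall(".//messages/msg[@type='ERROR']"))
print(n_errors)  # 2
```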

Here is a complete example based on the splunklib documentation. ResultsReader parses the Atom feed and calls data.load() for each of your results.

import splunklib.client as client
import splunklib.results as results
from time import sleep

log_type = 'ERROR'

service = client.connect(...)
job = service.jobs.create("search * | head 5")
while not job.is_done():
    sleep(.2)
rr = results.ResultsReader(job.results())
for result in rr:
    if isinstance(result, results.Message):
        # Diagnostic messages may be returned in the results
        # Check the type and do something.
        if result.type == log_type:
            print '%s: %s' % (result.type, result.message)
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        # Do something with them if required.
        pass
assert rr.is_preview == False

Where can I find my feed.xml document?

Added a complete example... check the client docs for the different client.connect() parameters. Your XML is retrieved from the job via the REST API... Create a new job (like the example job), or use the client.job() method to retrieve a scheduled job by its sid. Otherwise, a scheduled job may store its results in an XML file on the Splunk server, which you could then access directly.

What mode is your script's search running in? normal, blocking, oneshot, or export? The default is normal; check the sample code for each mode.

Hahaha, HOST = "8.8.8.8". That's Google's DNS server ;-)