Javascript Python HTML output (first attempt), several questions (code included)

Although I've been playing with Python for a few months (just a hobbyist), I know very little about web programming (a little HTML, zero JavaScript, etc.). That said, a current project has me looking at web programming for the first time, which leads me to ask:

Thanks to the answers, I've made some progress. Right now I'm using only Python and HTML. I can't post my project code, so I wrote a small example using a Twitter search (see below).

My questions are:

  • Am I doing anything stupid? WebOutput() feels clear to me, but inefficient. If I used JavaScript, I assume I could write an HTML template file and then just update the data. Would that be a better approach?

  • In what situations would a framework be appropriate for an app like this, and when would it be overkill?

  • Sorry for the basic questions, but I don't want to spend a lot of time heading down the wrong path.

    import simplejson, urllib, time
    
    #query, results per page 
    query = "swineflu"
    rpp = 25
    jsonURL = "http://search.twitter.com/search.json?q=" + query + "&rpp=" + str(rpp)
    
    #currently storing all search results, really only need most recent but want the data avail for other stuff
    data = []
    
    #iterate over search results
    def SearchResults():
        jsonResults = simplejson.load(urllib.urlopen(jsonURL))
        for tweet in jsonResults["results"]:
            try:
                #terminal output
                feed = tweet["from_user"] + " | " + tweet["text"]
                print feed
                data.append(feed)
            except:
                print "exception??"
    
    # writes latest tweets to file/web
    def WebOutput():
        f = open("outw.html", "w")
        f.write("<html>\n")
        f.write("<title>python newb's twitter search</title>\n")
        f.write("<head><meta http-equiv='refresh' content='60'></head>\n")
        f.write("<body>\n")
        f.write("<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>")
        f.write("<h2 style='font-size:125%'>Searching Twitter for: " + query + "</h2>\n")
        f.write("<h2 style='font-size:125%'>" + time.ctime() + " (updates every 60 seconds)</h2>\n")
    
        for i in range(1,rpp):
            try:
                f.write("<p style='font-size:90%'>" + data[-i] + "</p>\n")
            except:
                continue
    
        f.write("</body>\n")
        f.write("</html>\n")
        f.close()
    
    while True:
        print ""
        print "\nSearching Twitter for: " + query + " | current date/time is: " + time.ctime()
        print ""
        SearchResults()
        WebOutput()
        time.sleep(60)
    

    Then replace the entire body of WebOutput() with a call that renders the template.

    Finally, you would create the file
    /path/to/mytmpl.txt
    , which looks like this:

    <html>
    <title>python newb's twitter search</title>
    <head><meta http-equiv='refresh' content='60'></head>
    <body>
    <h1 style='font-size:150%'>Python Newb's Twitter Search</h1>
    <h2 style='font-size:125%'>Searching Twitter for: ${query}</h2>
    <h2 style='font-size:125%'>${time} (updates every 60 seconds)</h2>
    
    % for datum in data:
        <p style='font-size:90%'>${datum}</p>
    % endfor
    
    </body>
    </html>
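    The ${...} substitutions and the % for / % endfor loop above match the syntax of a template engine such as Mako. As a dependency-free sketch of the same idea, here is what the rewritten WebOutput() body could do using only the stdlib string.Template: the loop is expanded in Python, and the template text is inlined rather than read from /path/to/mytmpl.txt so the example is self-contained (a real engine would also handle the loop inside the template file):

```python
import time
from string import Template

# Stdlib-only sketch: string.Template handles the ${...} substitutions;
# the per-tweet loop is expanded in Python before substituting.
PAGE = Template("""<html>
<title>python newb's twitter search</title>
<head><meta http-equiv='refresh' content='60'></head>
<body>
<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>
<h2 style='font-size:125%'>Searching Twitter for: ${query}</h2>
<h2 style='font-size:125%'>${time} (updates every 60 seconds)</h2>
${results}
</body>
</html>
""")

def render_page(query, data):
    # Build the repeated <p> rows first, then substitute once.
    results = "\n".join("<p style='font-size:90%%'>%s</p>" % d for d in data)
    return PAGE.substitute(query=query, time=time.ctime(), results=results)
```

    With a real template engine you would keep the markup in mytmpl.txt and pass query, time, and data to its render call; either way, the point is that the HTML lives in one place, separate from the code that gathers the data.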
    
    
    You can see that one good thing you've already done is separate the output (the "view layer", in web terms) from the code that fetches and formats the data (the "model" and "controller" layers). That will make it easier to change your script's output in the future.

    (Note: I haven't tested the code I presented here; apologies if it isn't quite right. It should basically work, though.)

    Using string formatting can make things tidier and less error-prone.

    A simple example, where %s is replaced by "a title":

    my_html = "<html><body><h1>%s</h1></body></html>" % ("a title")
    
    You can also use named keys when doing the %s substitution, like %(key)s, which means you don't have to keep track of the order of the %s placeholders. Use a dict instead of a tuple; it maps each key to a value:

    my_html = "<html><body><h1>%(title)s</h1>%(content)s</body></html>" % {
        "title": "a title",
        "content":"my content"
    }
    
    Finally, you only need the twitterd module if you're accessing data that requires a login. The public timeline is public and can be accessed without any authentication, so you can remove the twitter import and the api = line. If you do want to use twitterd, you have to do something with the api variable, for example:

    api = twitterd.Api(username='username', password='password')
    statuses = api.GetPublicTimeline()
    
    So the way I might write the script is:

    import time
    import urllib
    import simplejson
    
    def search_results(query, rpp = 25): # 25 is default value for rpp
        url = "http://search.twitter.com/search.json?q=%s&rpp=%s" % (query, rpp)
    
        jsonResults = simplejson.load(urllib.urlopen(url))
    
        data = [] # setup empty list, within function scope
        for tweet in jsonResults["results"]:
            # Unicode!
            # And tweet is a dict, so we can use the string-formatting key thing
            data.append(u"%(from_user)s | %(text)s" % tweet)
    
        return data # instead of modifying the global data!
    
    def web_output(data, query):
        results_html = ""
    
        # loop over each index of data, storing the item in "result"
        for result in data:
            # append to string
            results_html += "    <p style='font-size:90%%'>%s</p>\n" % (result)
    
        html = """<html>
        <head>
        <meta http-equiv='refresh' content='60'>
        <title>python newb's twitter search</title>
        </head>
        <body>
            <h1 style='font-size:150%%'>Python Newb's Twitter Search</h1>
            <h2 style='font-size:125%%'>Searching Twitter for: %(query)s</h2>
            <h2 style='font-size:125%%'> %(ctime)s (updates every 60 seconds)</h2>
        %(results_html)s
        </body>
        </html>
        """ % {
            'query': query,
            'ctime': time.ctime(),
            'results_html': results_html
        }
    
        return html
    
    
    def main():
        query_string = "swineflu"
        results = search_results(query_string) # second value defaults to 25
    
        html = web_output(results, query_string)
    
        # Moved the file writing stuff to main, so WebOutput is reusable
        f = open("outw.html", "w")
        f.write(html)
        f.close()
    
        # Once the file is written, display the output to the terminal:
        for formatted_tweet in results:
            # the .encode() turns the unicode string into an ASCII one, ignoring
            # characters it cannot display correctly
            print formatted_tweet.encode('ascii', 'ignore')
    
    
    if __name__ == '__main__':
        main()
    # Common Python idiom, only runs main if directly run (not imported).
    # Then means you can do..
    
    # import myscript
    # myscript.search_results("#python")
    
    # without your "main" function being run
    
    Save the CherryPy script below and run it, then browse to
    http://0.0.0.0:8080/
    and it will display your page.

    The problem is that it queries the Twitter API every time the page is loaded. That won't be an issue if it's just you using it, but with hundreds (or even tens) of people viewing the page it will start to slow down (and eventually you'd probably get rate-limited or blocked by the Twitter API).

    The solution basically brings you back to where you started: you can write (cache) the search results to disk, and re-search only when the cached copy is stale.
    
    import cherrypy
    
    # import the twitter_searcher.py script
    import twitter_searcher
    # you can now call the the functions in that script, for example:
    # twitter_searcher.search_results("something")
    
    class TwitterSearcher(object):
        def index(self):
            query_string = "swineflu"
            results = twitter_searcher.search_results(query_string) # second value defaults to 25
            html = twitter_searcher.web_output(results, query_string)
    
            return html
        index.exposed = True
    
    cherrypy.quickstart(TwitterSearcher())
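    The caching idea mentioned above can be sketched as follows. The cache file name, the 60-second freshness window, and the fetch callback are illustrative assumptions, not part of the original answer:

```python
import json
import os
import time

CACHE_FILE = "search_cache.json"  # assumed cache location
CACHE_SECONDS = 60                # matches the page's refresh interval

def cached_search(query, fetch):
    """Return cached results when fresh; otherwise call fetch(query)
    and write the results to disk for subsequent page loads."""
    if os.path.exists(CACHE_FILE):
        age = time.time() - os.path.getmtime(CACHE_FILE)
        if age < CACHE_SECONDS:
            with open(CACHE_FILE) as f:
                return json.load(f)
    results = fetch(query)  # e.g. twitter_searcher.search_results
    with open(CACHE_FILE, "w") as f:
        json.dump(results, f)
    return results
```

    The CherryPy index() method would then call cached_search("swineflu", twitter_searcher.search_results) instead of hitting the API directly, so at most one Twitter request is made per refresh window no matter how many people load the page.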