Python: How to read a csv in parallel and write to Cassandra in parallel for high throughput?

Tags: python, python-3.x, cassandra, dask

I have tried execute, execute_async and execute_concurrent with Cassandra, but for reading 100,000 rows I could not get them indexed into Cassandra in under 55 minutes. Note that I had already set the concurrency to 1000 and tuned the concurrent read/write limits in the YAML file up to 10,000 as well. I tried replication factors of 0, 1 and 2 when creating the cluster. None of it indexed the file any faster. So I decided that, rather than reading the csv sequentially, appending the rows to a list and then writing them to Cassandra in batch, concurrent or async mode, why not read the csv in parallel?! So I used dask to read the csv file of 10M rows.
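For context, the execute_async path mentioned above looks roughly like the bounded-futures sketch below (illustrative only, not the exact script that was timed; it assumes the keyspace and table that the dask code further down creates):

# Sketch: keep a bounded number of in-flight async inserts instead of blocking on each one.
from cassandra.cluster import Cluster

cluster = Cluster(['localhost'], connect_timeout=50)
session = cluster.connect('fri_athena_two')   # keyspace created by the code below
insert_stmt = session.prepare(
    "INSERT INTO TenMillion (id, version, row) VALUES (?, ?, ?)")

def insert_async(rows, max_in_flight=1000):
    """rows: iterable of (id, version, row_json) tuples."""
    futures = []
    for params in rows:
        futures.append(session.execute_async(insert_stmt, params))
        if len(futures) >= max_in_flight:
            for f in futures:
                f.result()            # wait for this window of requests to land
            futures = []
    for f in futures:
        f.result()

The dask-based version follows: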

import json
import logging
import sys
from datetime import datetime

import dask
import dask.dataframe as dd
import dask.multiprocessing

import pandas as pd
from cassandra import ConsistencyLevel, WriteTimeout
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, SimpleStatement
from cassandra.concurrent import execute_concurrent, execute_concurrent_with_args


class PythonCassandraExample:
    def __init__(self, version):
        self.cluster = None
        self.session = None
        self.keyspace = None
        self.log = None
        self.version = version

    def __del__(self):
        self.cluster.shutdown()

    def createsession(self):
        self.cluster = Cluster(['localhost'], connect_timeout=50)
        self.session = self.cluster.connect(self.keyspace)

    def getsession(self):
        return self.session

    # How about Adding some log info to see what went wrong
    def setlogger(self):
        log = logging.getLogger()
        log.setLevel('INFO')
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s [%(levelname)s] %(name)s: %(message)s"))
        log.addHandler(handler)
        self.log = log
    # Create Keyspace based on Given Name

    def handle_error(self, exception):
        self.log.error("Failed to fetch user info: %s", exception)


    def createkeyspace(self, keyspace):
        """
        :param keyspace:  The Name of Keyspace to be created
        :return:
        """
        # Before creating a new keyspace, check whether it already exists; if so, drop it and recreate it
        rows = self.session.execute(
            "SELECT keyspace_name FROM system_schema.keyspaces")
        if keyspace in [row[0] for row in rows]:
            self.log.info("dropping existing keyspace...")
            self.session.execute("DROP KEYSPACE " + keyspace)

        self.log.info("creating keyspace...")
        self.session.execute("""
                CREATE KEYSPACE %s
                WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '3' }
                """ % keyspace)

        self.log.info("setting keyspace...")
        self.session.set_keyspace(keyspace)

    def create_table(self, table_name):
        self.table_name = table_name
        c_sql = "CREATE TABLE IF NOT EXISTS {} (id varchar, version varchar, row varchar, PRIMARY KEY(id, version));".format(
            self.table_name)
        print("Query for creating table is: {}".format(c_sql))
        self.session.execute(c_sql)
        self.log.info("DP Table Created !!!")
        self.insert_sql = self.session.prepare(
            (
                "INSERT INTO  {} ({}, {}, {}) VALUES (?,?,?)"
            ).format(
                self.table_name, "id", "version", "row"
            )
        )

    # lets do some batch insert
    def insert_data(self, key, version, row):
        self.session.execute(
            self.insert_sql, [key, version, row]
        )

    # Turn each dask partition (a pandas DataFrame) into a delayed task that
    # writes its rows one synchronous prepared-statement execution at a time.
    @dask.delayed
    def print_a_block(self, d):
        d = d.to_dict(orient='records')
        for row in d:
            key = str(row["0"])
            row = json.dumps(row, default=str)
            self.insert_data(key, self.version, row)

if __name__ == '__main__':
    start_time = datetime.utcnow()
    example1 = PythonCassandraExample(version="version_1")
    example1.createsession()
    example1.setlogger()
    example1.createkeyspace('fri_athena_two')
    example1.create_table('TenMillion')
    example1.log.info("Calling compute!")
    df = dd.read_csv("/Users/aviralsrivastava/dev/levelsdb-learning/10gb.csv")
    # build one delayed task per csv partition, then run them all
    dask.compute(*[example1.print_a_block(d) for d in df.to_delayed()])
    print(datetime.utcnow() - start_time)

Even with dask, all the effort was wasted: an hour in, the task of writing the rows into Cassandra still had not finished. What else should I do to reduce the time taken?
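One direction worth sketching (illustrative only, untested at this scale): have each dask partition push its rows through execute_concurrent_with_args instead of issuing one blocking execute per row, so every partition keeps many inserts in flight at once. Column "0", the keyspace/table names and the csv path are the same assumptions as in the code above.

# Sketch: one execute_concurrent_with_args call per dask partition.
import json
from datetime import datetime

import dask
import dask.dataframe as dd
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

cluster = Cluster(['localhost'], connect_timeout=50)
session = cluster.connect('fri_athena_two')
insert_stmt = session.prepare(
    "INSERT INTO TenMillion (id, version, row) VALUES (?, ?, ?)")

@dask.delayed
def write_partition(pdf, version):
    records = pdf.to_dict(orient='records')
    params = [(str(rec["0"]), version, json.dumps(rec, default=str))
              for rec in records]
    # many inserts in flight per partition instead of one blocking execute per row
    execute_concurrent_with_args(session, insert_stmt, params, concurrency=100)
    return len(params)

df = dd.read_csv("/Users/aviralsrivastava/dev/levelsdb-learning/10gb.csv")
start = datetime.utcnow()
dask.compute(*[write_partition(part, "version_1") for part in df.to_delayed()],
             scheduler='threads')   # threads can share the driver session; processes cannot pickle it
print(datetime.utcnow() - start)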

Maybe switch to a different database engine? Cassandra is known for slow indexing; for a small dataset like 10M rows you could use just about anything, and there are modern alternatives. Depending on your application/data model, Cassandra may simply not be a good fit.

I need to insert 100 million rows per day, and at that scale kyotocabinet was too slow, and so was leveldb. Ideas?

It really depends on what you want to do with the data. If the documents need complex indexes, mongodb or couchdb may be a good choice. If there are lots of "simple" indexes, MySQL works very well. Another option is to compute the index values at insert time and store the data in redis, which is very fast, although row updates can be painful. If you need full-text search it gets trickier (check the features and prototype to confirm).
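A minimal sketch of the "compute the index values at insert time and store the data in redis" idea mentioned above (assumes redis-py and a local redis server; the key layout and the key_field/index_field names are purely illustrative placeholders):

# Sketch: store each row under row:<id>:<version> and keep one set per indexed value.
import json

import redis

r = redis.Redis(host='localhost', port=6379)

def insert_rows(records, version, key_field='0', index_field='1', batch_size=10000):
    """records: iterable of dicts (e.g. csv rows); field names are placeholders."""
    pipe = r.pipeline(transaction=False)    # pipeline the writes to cut round trips
    for i, rec in enumerate(records, 1):
        key = str(rec[key_field])
        pipe.set('row:{}:{}'.format(key, version), json.dumps(rec, default=str))
        # the "index" computed at insert time: value of index_field -> set of row keys
        pipe.sadd('idx:{}:{}'.format(index_field, rec[index_field]), key)
        if i % batch_size == 0:
            pipe.execute()                  # flush in batches to cap client memory
    pipe.execute()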