
How to migrate MySQL data to Elasticsearch using Logstash


I need a brief explanation of how to get MySQL data into Elasticsearch using Logstash. Can someone explain the steps involved?

You can use the JDBC input plugin for Logstash to do this.


This is a broad question, and I don't know how familiar you are with MySQL and ES. Say you have a table user: you could simply dump it to CSV and load that into your ES. But if the data is dynamic, with MySQL acting like a pipeline, then you need to write a script to do that work. Either way, check the links below to build up the basics, and then ask how.

Also, you may be wondering how to convert the CSV into JSON documents, which is the format ES understands best.
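As a rough sketch of that one-off CSV route: a minimal Logstash pipeline that reads a CSV dump of the user table, parses each row into a JSON document, and indexes it into a local ES. The file path, column names, and index name here are illustrative assumptions, not part of the original answer.

input {
    file {
        path => "/tmp/user.csv"            # assumed location of the CSV dump
        start_position => "beginning"
        sincedb_path => "/dev/null"        # forget the read position so the demo re-reads the file
    }
}
filter {
    csv {
        separator => ","
        columns => ["id", "name", "email"] # assumed column names of the user table
    }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "user"
    }
}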


Let me give you a high-level set of instructions.

1. Install Logstash and Elasticsearch.
2. Copy the ojdbc7.jar into the Logstash bin folder.
3. Create a configuration file for Logstash, e.g. config.yml.
4. Go to the bin folder and run it like: logstash -f config.yml
# config.yml
input {
    # Get the data from database, configure fields to get data incrementally
    jdbc {
        jdbc_driver_library => "./ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
        jdbc_user => "user"
        jdbc_password => "pwd"

        id => "some_id"

        jdbc_validate_connection => true
        jdbc_validation_timeout => 1800
        connection_retry_attempts => 10
        connection_retry_attempts_wait_time => 10

        # fetch the db logs incrementally using logid
        statement => "select * from customer.table where logid > :sql_last_value order by logid asc"

        # limit how many results are pre-fetched at a time from the cursor into the client's cache before retrieving more from the result set
        jdbc_fetch_size => 500
        jdbc_default_timezone => "America/New_York"

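        # track the last processed logid; its value is substituted for :sql_last_value in the statement above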
        use_column_value => true
        tracking_column => "logid"
        tracking_column_type => "numeric"
        record_last_run => true

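        # cron-style schedule: poll for new rows every 2 minutes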
        schedule => "*/2 * * * *"

        type => "log.customer.table"
        add_field => {
            "source" => "customer.table"
            "tags" => "customer.table"
            "logLevel" => "ERROR"
        }

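        # file where the last logid is persisted between pipeline restarts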
        last_run_metadata_path => "last_run_metadata_path_table.txt"
    }

}

# Massage the data to store in index
filter {
    if [type] == 'log.customer.table' {
        #assign values from db column to custom fields of index
        ruby{
            code => "event.set( 'errorid', event.get('ssoerrorid') );
                    event.set( 'msg', event.get('errormessage') );
                    event.set( 'logTimeStamp', event.get('date_created'));
                    event.set( '@timestamp', event.get('date_created'));
                    "
        }
        #remove the db columns that were mapped to custom fields of index
        mutate {
            remove_field => ["ssoerrorid","errormessage","date_created" ]
        }
    }#end of [type] == 'log.customer.table' 
} #end of filter

# Insert into index
output {
    if [type] == 'log.customer.table' {
        amazon_es {
            hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
            region => "us-east-1"
            aws_access_key_id => '<access key>'
            aws_secret_access_key => '<secret password>'
            index => "production-logs-table-%{+YYYY.MM.dd}"
        }
    }
}
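
A couple of hedged notes on adapting this example. First, the ruby filter above is doing plain renames for two of the fields; the same effect could be had with mutate's rename option, which also removes the source columns, so only date_created would still need explicit handling. A minimal sketch of that alternative, using the same column names as above:

filter {
    if [type] == 'log.customer.table' {
        # rename moves each db column to the index field name and drops the original
        mutate {
            rename => {
                "ssoerrorid"   => "errorid"
                "errormessage" => "msg"
            }
        }
    }
}

Second, the question asks about MySQL, while the example above uses the Oracle driver. The same jdbc input works for MySQL with the Connector/J jar and driver class swapped in; the jar version, connection string, and table name below are assumptions for illustration:

input {
    jdbc {
        # MySQL Connector/J jar, downloaded separately (version is an assumption)
        jdbc_driver_library => "./mysql-connector-java-8.0.28.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "user"
        jdbc_password => "pwd"

        # same incremental pattern as the Oracle example
        statement => "select * from customer_table where logid > :sql_last_value order by logid asc"
        use_column_value => true
        tracking_column => "logid"
        tracking_column_type => "numeric"
        schedule => "*/2 * * * *"
    }
}

Note also that amazon_es is a separate output plugin (logstash-output-amazon_es) that has to be installed first; for a self-managed cluster, the stock elasticsearch output works in its place.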