
Java: Convert a Unix timestamp to Cassandra's TIMEUUID

Tags: java, unix-timestamp, cassandra-3.0, timeuuid, nosql

I am learning all about Apache Cassandra 3.x.x and I am trying to develop some stuff to play around with. The problem is that I want to store data into a Cassandra table containing these columns:

id (UUID - Primary Key) | Message (TEXT) | REQ_Timestamp (TIMEUUID) | Now_Timestamp (TIMEUUID)
REQ_Timestamp holds the time when the message left the client at the frontend level. Now_Timestamp, on the other hand, is the time when the message is finally stored in Cassandra. I need both timestamps because I want to measure the amount of time it takes to handle the request from its origin until the data is safely stored.

Creating Now_Timestamp is easy: I just use the now() function and it generates the TIMEUUID automatically. The problem arises with REQ_Timestamp. How can I convert that Unix timestamp to a TIMEUUID so Cassandra can store it? Is this even possible?
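To make this concrete, what I have today looks roughly like the sketch below, using the DataStax Java driver (the keyspace test and the table name memos are just placeholders); the req_timestamp column is the one I do not know how to fill:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class MemoTableSketch {
    public static void main(String[] args) {
        // Connect to a local node; assumes a keyspace named "test" already exists
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("test")) {

            // Rough layout of the table described above
            session.execute("CREATE TABLE IF NOT EXISTS memos ("
                    + "id UUID PRIMARY KEY, "
                    + "message TEXT, "
                    + "req_timestamp TIMEUUID, "
                    + "now_timestamp TIMEUUID)");

            // Now_Timestamp is easy: now() generates the TIMEUUID server-side.
            // req_timestamp is left out here because that is exactly what I am asking about.
            session.execute("INSERT INTO memos (id, message, now_timestamp) "
                    + "VALUES (uuid(), 'test memo', now())");
        }
    }
}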


My backend architecture is this: I get the data in JSON from the frontend to a web service that processes it and stores it in Kafka. Then, a Spark Streaming job takes the Kafka log and puts it into Cassandra.

This is my web service that puts the data into Kafka:

@Path("/")
public class MemoIn {

    @POST
    @Path("/in")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.TEXT_PLAIN)
    public Response goInKafka(InputStream incomingData){
        StringBuilder bld = new StringBuilder();
        try {
            BufferedReader in = new BufferedReader(new InputStreamReader(incomingData));
            String line = null;
            while ((line = in.readLine()) != null) {
                bld.append(line);
            }
        } catch (Exception e) {
            System.out.println("Error Parsing: - ");
        }
        System.out.println("Data Received: " + bld.toString());

        JSONObject obj = new JSONObject(bld.toString());
        String line = obj.getString("id_memo") + "|" + obj.getString("id_writer") +
                                 "|" + obj.getString("id_diseased")
                                 + "|" + obj.getString("memo") + "|" + obj.getLong("req_timestamp");

        try {
            KafkaLogWriter.addToLog(line);
        } catch (Exception e) {
            e.printStackTrace();
        }

        return Response.status(200).entity(line).build();
    }


}
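For reference, the JSON body I POST to /in looks more or less like the following (built here with the same org.json library; the values are made up, and I am assuming req_timestamp is a Unix epoch in milliseconds):

import org.json.JSONObject;

public class SamplePayload {
    public static void main(String[] args) throws Exception {
        // Example of the JSON body the /in endpoint reads (placeholder values)
        JSONObject payload = new JSONObject();
        payload.put("id_memo", "memo-001");
        payload.put("id_writer", "writer-42");
        payload.put("id_diseased", "person-7");
        payload.put("memo", "Some memo text");
        payload.put("req_timestamp", System.currentTimeMillis()); // Unix timestamp in ms
        System.out.println(payload.toString());
    }
}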
This is my Kafka writer:

package main.java.vcemetery.webservice;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;
import org.apache.kafka.clients.producer.Producer;

public class KafkaLogWriter {

    public static void addToLog(String memo)throws Exception {
        // private static Scanner in;
            String topicName = "MemosLog";

            /*
            First, we set the properties of the Kafka Log
             */
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("acks", "all");
            props.put("retries", 0);
            props.put("batch.size", 16384);
            props.put("linger.ms", 1);
            props.put("buffer.memory", 33554432);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // We create the producer
            Producer<String, String> producer = new KafkaProducer<>(props);
            // We send the line into the producer
            producer.send(new ProducerRecord<>(topicName, memo));
            // We close the producer
            producer.close();

    }
}
Finally, this is my Spark Streaming job:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import com.datastax.driver.core.Cluster;

import scala.Tuple2;

public class MemoStream {

    public static void main(String[] args) throws Exception {
        Logger.getLogger("org").setLevel(Level.ERROR);
        Logger.getLogger("akka").setLevel(Level.ERROR);

        // Create the streaming context with a 10 second batch interval
        SparkConf sparkConf = new SparkConf().setAppName("KafkaSparkExample").setMaster("local[2]");
        JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);

        /* Create the collection of topics to subscribe to; in this case only one topic */
        Collection<String> topics = Arrays.asList("MemosLog");

        final JavaInputDStream<ConsumerRecord<String, String>> kafkaStream =
                KafkaUtils.createDirectStream(
                        ssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
                );

        kafkaStream.mapToPair(record -> new Tuple2<>(record.key(), record.value()));
        // Extract the value of each Kafka record as a plain string stream
        JavaDStream<String> stream = kafkaStream.map(record -> (record.value().toString()));
        // Then, we split each stream into lines or memos
        JavaDStream<String> memos = stream.flatMap(x -> Arrays.asList(x.split("\n")).iterator());
        /*
         To split each memo into its id and message sections, the pipe has to be escaped
         as \\| because | is a regex metacharacter
          */
        JavaDStream<String> sections = memos.flatMap(y -> Arrays.asList(y.split("\\|")).iterator());
        sections.print();
        sections.foreachRDD(rdd -> {
           rdd.foreachPartition(partitionOfRecords -> {
               //We establish the connection with Cassandra
               Cluster cluster = null;
               try {
                   cluster = Cluster.builder()
                           .withClusterName("VCemeteryMemos") // ClusterName
                           .addContactPoint("127.0.0.1") // Host IP
                           .build();

               } finally {
                   if (cluster != null) cluster.close();
               }
               while(partitionOfRecords.hasNext()){


               }
           });
        });

        ssc.start();
        ssc.awaitTermination();

    }
}

Thanks in advance.

Cassandra does not have a function to convert from a Unix timestamp; you have to do the conversion on the client side (a rough sketch with the Java driver is shown below the reference).


Ref:
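
For example, assuming the DataStax Java driver is your client and that req_timestamp is a Unix epoch in milliseconds, the com.datastax.driver.core.utils.UUIDs helper can build the TIMEUUID on the client side. This is only a minimal sketch:

import java.util.UUID;

import com.datastax.driver.core.utils.UUIDs;

public class TimeUuidConversion {
    public static void main(String[] args) {
        long reqTimestamp = System.currentTimeMillis(); // the Unix timestamp in milliseconds

        // Builds a type-1 (time-based) UUID whose time component is exactly reqTimestamp.
        // Note: startOf()/endOf() always return the same UUID for the same timestamp, so the
        // driver documents them for range queries; UUIDs.timeBased() gives a unique value but
        // only for the current instant.
        UUID reqTimeUuid = UUIDs.startOf(reqTimestamp);

        // The original timestamp can be recovered for the latency measurement
        long recovered = UUIDs.unixTimestamp(reqTimeUuid);

        System.out.println(reqTimeUuid + " -> " + recovered);
    }
}

If that column must stay unique per row, storing the raw value as a TIMESTAMP or BIGINT instead of a TIMEUUID might be the simpler schema choice, but that is a decision on your side.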

When I asked this question I was actually referring to that same document. Any ideas on how to do the conversion on the client side? I am stuck there.

That depends on the client you are using. Is it the driver? Maybe you could show some code of what you are already doing.

My backend architecture is this: I get the data in JSON from the frontend to a web service that processes it and stores it in Kafka. Then a Spark Streaming job takes the Kafka log and puts it into Cassandra. I will edit my original post with the web service/Kafka code and the Spark code I have written so far.

I have no experience with Spark, but this might be useful: