How to remove the numbers from a string column of a DataFrame in Scala

I am reading a text file in Scala, and it contains lines like this:

05:49:56.604899 00:00:00:00:00:02 > 00:00:00:00:00:03, ethertype IPv4 (0x0800), length 10202: 10.0.0.2.54880 > 10.0.0.3.5001: Flags [.], seq 3641977583:3641987719, ack 129899328, win 58, options [nop,nop,TS val 432623 ecr 432619], length 10136
I extract the fields with the code shown below, and the current output is:

+---------------+--------------+--------------+-----+-----+
|   time_stamp_0|   sender_ip_1| receiver_ip_2|label|count|
+---------------+--------------+--------------+-----+-----+
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    1|   19|
Here is my code:

import scala.util.Try

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// sc is the SparkContext and session the SparkSession, both created earlier
val customSchema = StructType(Array(
  StructField("time_stamp_0", StringType, true),
  StructField("sender_ip_1", StringType, true),
  StructField("receiver_ip_2", StringType, true),
  StructField("label", IntegerType, true)))

// build the training DataFrame from the raw tcpdump trace
val Dstream_Train = sc.textFile("/Users/saeedtkh/Desktop/sharedsaeed/Test/trace1.txt")
val Row_Dstream_Train = Dstream_Train.map(line => line.split(">")).map(array => {
  val first = Try(array(0).trim.split(" ")(0)) getOrElse ""                   // timestamp
  val second = Try(array(1).trim.split(" ")(6)) getOrElse ""                  // sender IP (port still attached)
  val third = Try(array(2).trim.split(" ")(0).replace(":", "")) getOrElse ""  // receiver IP (port still attached)
  Row.fromSeq(Seq(first, second, third, 1))
})
val Frist_Dataframe = session.createDataFrame(Row_Dstream_Train, customSchema).toDF("time_stamp_0", "sender_ip_1", "receiver_ip_2", "label")
val columns1and2 = Window.partitionBy("sender_ip_1", "receiver_ip_2") // <-- matches groupBy
This means I need to drop the last number from each IP (the number is not constant; its length varies from line to line).


Could you help me?

In your example, the simplest fix is to remove the dangling port number inside your lambda.
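A quick check of the idea in a Scala shell might look like this (a sketch using take together with lastIndexOf):

scala> val ip = "10.0.0.2.54880"
ip: String = 10.0.0.2.54880

scala> ip.take(ip.lastIndexOf("."))   // keep everything before the last "." so the port is dropped
res0: String = 10.0.0.2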

For example, in your case you would fix the sender and receiver IPs (the second and third values in your lambda) before building the Row, like this:

val secondFixed = second.take(second.lastIndexOf("."))
val thirdFixed = third.take(third.lastIndexOf("."))
Row.fromSeq(Seq(first, secondFixed, thirdFixed, 1))
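Put together inside the map from the question, the lambda would then read roughly as follows (a sketch that keeps the rest of the original code unchanged):

val Row_Dstream_Train = Dstream_Train.map(line => line.split(">")).map(array => {
  val first = Try(array(0).trim.split(" ")(0)) getOrElse ""
  val second = Try(array(1).trim.split(" ")(6)) getOrElse ""
  val third = Try(array(2).trim.split(" ")(0).replace(":", "")) getOrElse ""
  // drop everything after the last "." to strip the port; take(-1) yields "" when no "." is found
  val secondFixed = second.take(second.lastIndexOf("."))
  val thirdFixed = third.take(third.lastIndexOf("."))
  Row.fromSeq(Seq(first, secondFixed, thirdFixed, 1))
})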

I would simply use a regex, which in my opinion is cleaner:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// one capture group each for the timestamp, the sender IP (without the port) and the receiver IP (without the port)
val regex = """^(\d\d:\d\d:\d\d.\d+).+length \d+: (\d{1,4}\.\d{1,4}\.\d{1,4}\.\d{1,4})\.\d+ > (\d{1,4}\.\d{1,4}\.\d{1,4}\.\d{1,4}).\d+.+""".r

val customSchema = StructType(Array(
  StructField("time_stamp_0", StringType, nullable = true),
  StructField("sender_ip_1", StringType, nullable = true),
  StructField("receiver_ip_2", StringType, nullable = true),
  StructField("label", IntegerType, nullable = true)))

val line = "05:49:56.604899 00:00:00:00:00:02 > 00:00:00:00:00:03, ethertype IPv4 (0x0800), length 10202: 10.0.0.2.54880 > 10.0.0.3.5001: Flags [.], seq 3641977583:3641987719, ack 129899328, win 58, options [nop,nop,TS val 432623 ecr 432619], length 10136"

val rdd = spark.sparkContext
  .parallelize(Seq(line))
  .map({
    case regex(timestamp, senderIp, receiverIp) =>
      Row(timestamp, senderIp, receiverIp, 1)
    case x => // Do something else here?
      throw new RuntimeException(s"Invalid line: $x")
  })

spark
  .createDataFrame(rdd, customSchema)
  .toDF("time_stamp_0", "sender_ip_1", "receiver_ip_2", "label")
  .show()
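On the sample line above, this should print something like:

+---------------+-----------+-------------+-----+
|   time_stamp_0|sender_ip_1|receiver_ip_2|label|
+---------------+-----------+-------------+-----+
|05:49:56.604899|   10.0.0.2|     10.0.0.3|    1|
+---------------+-----------+-------------+-----+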
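If the port is still attached once the values already sit in string columns, the trailing digits can also be stripped at the DataFrame level with regexp_replace; a minimal sketch, assuming the Frist_Dataframe built in the question:

import org.apache.spark.sql.functions.{col, regexp_replace}

// remove a trailing ".<digits>" (the port) from both IP columns
val cleaned = Frist_Dataframe
  .withColumn("sender_ip_1", regexp_replace(col("sender_ip_1"), """\.\d+$""", ""))
  .withColumn("receiver_ip_2", regexp_replace(col("receiver_ip_2"), """\.\d+$""", ""))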