
MySQL Slick 3.0 bulk insert or update (upsert)


What is the correct way to do a bulk insert-or-update (upsert) in Slick 3.0?

I'm using MySQL, where the appropriate query would be:

INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);

Here is my current code, and it is very slow :-(
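(The snippet itself did not survive; judging from the answer below, it was presumably a per-row loop that awaits each upsert before issuing the next. A hypothetical sketch, with db and foos standing in for the asker's actual names:)

import scala.concurrent.Await
import scala.concurrent.duration._

// Hypothetical slow pattern: one blocking round trip to MySQL per row.
rows.foreach { row =>
  Await.result(db.run(foos.insertOrUpdate(row)), 1.minute)
}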

What I'm looking for is the equivalent of:

def insertOrUpdate(values: Iterable[U]): DriverAction[MultiInsertResult, NoStream, Effect.Write]
As you can see in the docs, you can insert using the ++= function, which uses the JDBC batch insert feature. For instance:

val foos = TableQuery[FooTable]
val rows: Seq[Foo] = ...
foos ++= rows // here slick will use batch insert
You can also "tune" the size of the batches by grouping the sequence of rows:

val batchSize = 1000
rows.grouped(batchSize).foreach { group => foos ++= group }

There are a few ways to make this code faster (each one should be faster than the previous, but it gets progressively less idiomatic):

  • If on slick-pg 0.16.1+, run insertOrUpdateAll instead of insertOrUpdate:

    await(run(TableQuery[FooTable].insertOrUpdateAll(rows))).sum
    
  • Run all of the DBIO actions in one go, instead of waiting for each one to commit before running the next:

    val toBeInserted = rows.map { row => TableQuery[FooTable].insertOrUpdate(row) }
    val inOneGo = DBIO.sequence(toBeInserted)
    val dbioFuture = run(inOneGo)
    // Optionally, you can add a `.transactionally`
    // and / or `.withPinnedSession` here to pin all of these upserts
    // to the same transaction / connection
    // which *may* get you a little more speed:
    // val dbioFuture = run(inOneGo.transactionally)
    val rowsInserted = await(dbioFuture).sum
    
  • Drop down to the JDBC level and run the upsert all in one go (see the sketch below):
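    A minimal sketch of that JDBC-level idea, assuming the (a, b, c) table from the question, Slick's SimpleDBIO escape hatch, and the same run/await helpers as above (all names here are placeholders, not from the original answer):

    import slick.jdbc.MySQLProfile.api._

    // Prepare a single statement and add every row to its batch, so the
    // driver can send all the upserts to MySQL in one executeBatch() call.
    def upsertAllJdbc(rows: Seq[(Int, Int, Int)]): DBIO[Array[Int]] =
      SimpleDBIO { ctx =>
        val stmt = ctx.connection.prepareStatement(
          "insert into table (a,b,c) values (?,?,?) " +
            "on duplicate key update c = values(a) + values(b)")
        try {
          rows.foreach { case (a, b, c) =>
            stmt.setInt(1, a)
            stmt.setInt(2, b)
            stmt.setInt(3, c)
            stmt.addBatch()
          }
          stmt.executeBatch()
        } finally stmt.close()
      }

    val rowsUpserted = await(run(upsertAllJdbc(rows))).sum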

Using sqlu

This demo works:

case ("insertOnDuplicateKey",answers:List[Answer])=>{
  def buildInsert(r: Answer): DBIO[Int] =
    sqlu"insert into answer (aid,bid,sbid,qid,ups,author,uid,nick,pub_time,content,good,hot,id,reply,pic,spider_time) values (${r.aid},${r.bid},${r.sbid},${r.qid},${r.ups},${r.author},${r.uid},${r.nick},${r.pub_time},${r.content},${r.good},${r.hot},${r.id},${r.reply},${r.pic},${r.spider_time}) ON DUPLICATE KEY UPDATE `aid`=values(aid),`bid`=values(bid),`sbid`=values(sbid),`qid`=values(qid),`ups`=values(ups),`author`=values(author),`uid`=values(uid),`nick`=values(nick),`pub_time`=values(pub_time),`content`=values(content),`good`=values(good),`hot`=values(hot),`id`=values(id),`reply`=values(reply),`pic`=values(pic),`spider_time`=values(spider_time)"
  val inserts: Seq[DBIO[Int]] = answers.map(buildInsert)
  val combined: DBIO[Seq[Int]] = DBIO.sequence(inserts)
  DEST_DB.run(combined).onComplete(data=>{
    println("insertOnDuplicateKey data result",data.get.mkString)
    if (data.isSuccess){
      println(data.get)
      val lastid=answers.last.id
      Sync.lastActor !("upsert",tablename,lastid)
    }else{
      //retry
      self !("insertOnDuplicateKey",answers)
    }
  })
}
I also tried building the whole statement as a single sqlu, but it fails; the cause may be that sqlu does not provide plain string interpolation (see the sketch after the demo below).

This demo does not work:

case ("insertOnDuplicateKeyError",answers:List[Answer])=>{
  def buildSql(execpre:String,values: String,execafter:String): DBIO[Int] = sqlu"$execpre $values $execafter"
  val execpre="insert into answer (aid,bid,sbid,qid,ups,author,uid,nick,pub_time,content,good,hot,id,reply,pic,spider_time)  values "
  val execafter=" ON DUPLICATE KEY UPDATE  `aid`=values(aid),`bid`=values(bid),`sbid`=values(sbid),`qid`=values(qid),`ups`=values(ups),`author`=values(author),`uid`=values(uid),`nick`=values(nick),`pub_time`=values(pub_time),`content`=values(content),`good`=values(good),`hot`=values(hot),`id`=values(id),`reply`=values(reply),`pic`=values(pic),`spider_time`=values(spider_time)"
  val valuesstr=answers.map(row=>("("+List(row.aid,row.bid,row.sbid,row.qid,row.ups,"'"+row.author+"'","'"+row.uid+"'","'"+row.nick+"'","'"+row.pub_time+"'","'"+row.content+"'",row.good,row.hot,row.id,row.reply,row.pic,"'"+row.spider_time+"'").mkString(",")+")")).mkString(",\n")
  val insertOrUpdateAction=DBIO.seq(
    buildSql(execpre,valuesstr,execafter)
  )
  DEST_DB.run(insertOrUpdateAction).onComplete(data=>{
    if (data.isSuccess){
      println("insertOnDuplicateKey data result",data)
      val lastid=answers.last.id
      Sync.lastActor !("upsert",tablename,lastid)
    }else{
      self !("insertOnDuplicateKey2",answers)
    }
  })
}
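For what it's worth, Slick's plain-SQL interpolator treats every $arg as a bind parameter; to splice raw SQL text it has the separate #$ syntax. A minimal sketch of the fix under that assumption (the spliced values string is inserted unescaped, so this is SQL-injection-prone for untrusted content):

def buildSql(execpre: String, values: String, execafter: String): DBIO[Int] =
  // #$ splices each string into the statement verbatim instead of binding it
  sqlu"#$execpre #$values #$execafter"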
A MySQL synchronization tool using Scala Slick.

Cool. Thanks especially for introducing the second technique; I didn't know about it. Just to double-check: the first solution is not a bulk insert, is it? It looks like it runs all the inserts in parallel rather than in bulk, doesn't it? And with the first solution, am I at least saving the round trips to the MySQL server?

+1 for .transactionally: that alone took my performance for inserting 50,000 rows from 3 minutes down to 1 minute.

Since slick-pg 0.16.1 there is an .insertOrUpdateAll for bulk upserts using Slick & PostgreSQL.

Thanks, but I think ++= does not do insertOrUpdate. I think it only inserts, and in my case it will throw an integrity exception if there are duplicate rows.
case ("insertOnDuplicateKey",answers:List[Answer])=>{
  def buildInsert(r: Answer): DBIO[Int] =
    sqlu"insert into answer (aid,bid,sbid,qid,ups,author,uid,nick,pub_time,content,good,hot,id,reply,pic,spider_time) values (${r.aid},${r.bid},${r.sbid},${r.qid},${r.ups},${r.author},${r.uid},${r.nick},${r.pub_time},${r.content},${r.good},${r.hot},${r.id},${r.reply},${r.pic},${r.spider_time}) ON DUPLICATE KEY UPDATE `aid`=values(aid),`bid`=values(bid),`sbid`=values(sbid),`qid`=values(qid),`ups`=values(ups),`author`=values(author),`uid`=values(uid),`nick`=values(nick),`pub_time`=values(pub_time),`content`=values(content),`good`=values(good),`hot`=values(hot),`id`=values(id),`reply`=values(reply),`pic`=values(pic),`spider_time`=values(spider_time)"
  val inserts: Seq[DBIO[Int]] = answers.map(buildInsert)
  val combined: DBIO[Seq[Int]] = DBIO.sequence(inserts)
  DEST_DB.run(combined).onComplete(data=>{
    println("insertOnDuplicateKey data result",data.get.mkString)
    if (data.isSuccess){
      println(data.get)
      val lastid=answers.last.id
      Sync.lastActor !("upsert",tablename,lastid)
    }else{
      //retry
      self !("insertOnDuplicateKey",answers)
    }
  })
}
case ("insertOnDuplicateKeyError",answers:List[Answer])=>{
  def buildSql(execpre:String,values: String,execafter:String): DBIO[Int] = sqlu"$execpre $values $execafter"
  val execpre="insert into answer (aid,bid,sbid,qid,ups,author,uid,nick,pub_time,content,good,hot,id,reply,pic,spider_time)  values "
  val execafter=" ON DUPLICATE KEY UPDATE  `aid`=values(aid),`bid`=values(bid),`sbid`=values(sbid),`qid`=values(qid),`ups`=values(ups),`author`=values(author),`uid`=values(uid),`nick`=values(nick),`pub_time`=values(pub_time),`content`=values(content),`good`=values(good),`hot`=values(hot),`id`=values(id),`reply`=values(reply),`pic`=values(pic),`spider_time`=values(spider_time)"
  val valuesstr=answers.map(row=>("("+List(row.aid,row.bid,row.sbid,row.qid,row.ups,"'"+row.author+"'","'"+row.uid+"'","'"+row.nick+"'","'"+row.pub_time+"'","'"+row.content+"'",row.good,row.hot,row.id,row.reply,row.pic,"'"+row.spider_time+"'").mkString(",")+")")).mkString(",\n")
  val insertOrUpdateAction=DBIO.seq(
    buildSql(execpre,valuesstr,execafter)
  )
  DEST_DB.run(insertOrUpdateAction).onComplete(data=>{
    if (data.isSuccess){
      println("insertOnDuplicateKey data result",data)
      //retry
      val lastid=answers.last.id
      Sync.lastActor !("upsert",tablename,lastid)
    }else{
      self !("insertOnDuplicateKey2",answers)
    }
  })
}