PySpark UDF (BeautifulSoup) and its application in a DataFrame

I am trying to define a UDF that strips HTML tags from text. The following code works fine:

from bs4 import BeautifulSoup
from pyspark.sql.functions import udf

text = '<p>Tervetuloa leikkimään, laulamaan, loruilemaan, liikkumaan, taiteilemaan ja tutkimaan leikkipuiston<br>perheaamuun! Leikki- ja toimintaympäristö mahdollistavat vanhemman ja lapsen yhteisen puuhan ja leikin<br>ja lapset saavat leikkiseuraa.<br>Vanhemmilla on mahdollisuus tutustua muihin lapsiperheisiin ja lapset saavat leikkiseuraa. Vanhemmat ja<br>lapset voivat osallistua toiminnan suunnittel'

text_clr = BeautifulSoup(text, 'html.parser').get_text()
print(text_clr)
Then I define my UDF:

from bs4 import BeautifulSoup
from pyspark.sql.functions import udf

spark.udf.register("soup_udf",
                   lambda text_clr: BeautifulSoup(text, 'html.parser').get_text() if not text is None else 'NA',
                   "string")

text1 = '<p>Tervetuloa leikkimään, laulamaan, loruilemaan, liikkumaan, taiteilemaan ja tutkimaan leikkipuiston<br>perheaamuun! Leikki- ja toimintaympäristö mahdollistavat vanhemman ja lapsen yhteisen puuhan ja leikin<br>ja lapset saavat leikkiseuraa.<br>Vanhemmilla on mahdollisuus tutustua muihin lapsiperheisiin ja lapset saavat leikkiseuraa. Vanhemmat ja<br>lapset voivat osallistua toiminnan suunnittel'

text_clr1 = soup_udf(text1)
print(text_clr1)
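Note that spark.udf.register binds the name soup_udf for SQL queries; for the Python-side calls below (soup_udf(text1), soup_udf("desc")) to work, the UDF it returns presumably has to be captured as well:

soup_udf = spark.udf.register("soup_udf", ..., "string")  # register() also returns a Python-callable UDF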
Then I apply the UDF to my DataFrame, and I get a long error message that I don't understand :-(

display(dfAll4.select("id", soup_udf("desc").alias("desc_clr")).distinct())
dfAll4.select("id", soup_udf("desc").alias("desc_clr")).distinct().show(10,truncate=200)
DataFrame[id: string, desc_clr: string]

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-118-aa5fcd68d914> in <module>
     20 #display(df.select("id", squared_udf("id").alias("id_squared")))
     21 display(dfAll4.select("id", soup_udf("desc").alias("desc_clr")).distinct())
---> 22 dfAll4.select("id", soup_udf("desc").alias("desc_clr")).distinct().show(10,truncate=200)
     23 #dfAll4.withColumn("desc_clr", soup_udf(dfAll4.desc)).select("desc_clr").distinct().show(10, truncate=200)
     24 #dfAll4.select("desc", soup_udf(dfAll4.desc).alias("desc_clr")).distinct().show(10, truncate=200)

/usr/lib/spark-2.4.4/python/pyspark/sql/dataframe.py in show(self, n, truncate, vertical)
    380             print(self._jdf.showString(n, 20, vertical))
    381         else:
--> 382             print(self._jdf.showString(n, int(truncate), vertical))
    383 
    384     def __repr__(self):

/usr/lib/spark-2.4.4/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/lib/spark-2.4.4/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/lib/spark-2.4.4/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o3334.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 426.0 failed 1 times, most recent failure: Lost task 2.0 in stage 426.0 (TID 28703, localhost, executor driver): net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for bs4.element.NavigableString)
    at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
    at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
    at net.razorvine.pickle.Unpickler.load_newobj(Unpickler.java:711)
    at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:259)
    at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
    at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:90)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:89)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage16.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage16.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:365)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3389)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2550)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2764)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
    at sun.reflect.GeneratedMethodAccessor81.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for bs4.element.NavigableString)
    at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
    at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
    at net.razorvine.pickle.Unpickler.load_newobj(Unpickler.java:711)
    at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:259)
    at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
    at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:90)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:89)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage16.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage16.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
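The key line is the PickleException: construction of ClassDict (for bs4.element.NavigableString). get_text() can return a bs4 NavigableString (a str subclass), which Spark's pickler cannot reconstruct on the JVM side; the usual fix is to cast the result to a plain str. The registered lambda also takes text_clr as its parameter but references the module-level variable text, so every row would be cleaned against the same test string. A minimal corrected sketch, assuming the same spark session and dfAll4 DataFrame as in the question:

from bs4 import BeautifulSoup

def strip_tags(html):
    # Guard against null rows, mirroring the original lambda.
    if html is None:
        return 'NA'
    # Cast to a plain str: get_text() may return a bs4 NavigableString,
    # which Spark cannot pickle (the PickleException above).
    return str(BeautifulSoup(html, 'html.parser').get_text())

# Register under a SQL name and keep the returned, Python-callable UDF.
soup_udf = spark.udf.register("soup_udf", strip_tags, "string")

dfAll4.select("id", soup_udf("desc").alias("desc_clr")).distinct().show(10, truncate=200)

The answer below sidesteps bs4 entirely, parsing with simplified_scrapy's SimplifiedDoc instead: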
from simplified_scrapy.simplified_doc import SimplifiedDoc 
text1 = '<p>Tervetuloa leikkimään, laulamaan, loruilemaan, liikkumaan, taiteilemaan ja tutkimaan leikkipuiston<br>perheaamuun! Leikki- ja toimintaympäristö mahdollistavat vanhemman ja lapsen yhteisen puuhan ja leikin<br>ja lapset saavat leikkiseuraa.<br>Vanhemmilla on mahdollisuus tutustua muihin lapsiperheisiin ja lapset saavat leikkiseuraa. Vanhemmat ja<br>lapset voivat osallistua toiminnan suunnittel'

doc = SimplifiedDoc(text1)
print (doc.text)
Tervetuloa leikkimään, laulamaan, loruilemaan, liikkumaan, taiteilemaan ja tutkimaan leikkipuistonperheaamuun! Leikki- ja toimintaympäristö mahdollistavat vanhemman ja lapsen yhteisen puuhan ja leikinja lapset saavat leikkiseuraa.Vanhemmilla on mahdollisuus tutustua muihin lapsiperheisiin ja lapset saavat leikkiseuraa. Vanhemmat jalapset voivat osallistua toiminnan suunnittel
from pyspark.sql.functions import udf
from simplified_scrapy.simplified_doc import SimplifiedDoc 

# define and register the UDF

def text_simple_udf(text_in): 
    return SimplifiedDoc(text_in).text

spark.udf.register("text_simple_udf", text_simple_udf)

# test the UDF on a sample taken from the dataframe:

tst = text_simple_udf('<p>Tervetuloa leikkimään, laulamaan, loruilemaan, liikkumaan, taiteilemaan ja tutkimaan leikkipuiston<br>perheaamuun! Leikki- ja toimintaympäristö mahdollistavat vanhemman ja lapsen yhteisen puuhan ja leikin<br>ja lapset saavat leikkiseuraa.<br>Vanhemmilla on mahdollisuus tutustua muihin lapsiperheisiin ja lapset saavat leikkiseuraa. Vanhemmat ja<br>lapset voivat osallistua toiminnan suunnittel')
#print(tst)

# apply it to the dataframe:
dfAll4.selectExpr("desc", "(text_simple_udf(desc)) as desc_simpl").show(10, truncate=50)

#result:
+--------------------+--------------------+
|                desc|          desc_simpl|
+--------------------+--------------------+
|<p>Tervetuloa lei...|Tervetuloa leikki...|
|<p>Tervetuloa lei...|Tervetuloa leikki...|
|<p>Leikkipuiston ...|Leikkipuiston vau...|
|<p>Kaupunginvaltu...|Kaupunginvaltuust...|
|<p>Kaupunginvaltu...|Kaupunginvaltuust...|
|<p>Pienet jalat l...|Pienet jalat liik...|
|<p>Pienet jalat l...|Pienet jalat liik...|
|<p>Pienet jalat l...|Pienet jalat liik...|
|<p>Tervetuloa lei...|Tervetuloa leikki...|
|<p>Tervetuloa lei...|Tervetuloa leikki...|
+--------------------+--------------------+
only showing top 10 rows
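One caveat: unlike the question's lambda, text_simple_udf has no null guard, so a null desc would likely fail the task. A guarded variant (a sketch reusing the answer's names):

from simplified_scrapy.simplified_doc import SimplifiedDoc

def text_simple_udf(text_in):
    # Return a placeholder for null rows instead of failing the executor task.
    if text_in is None:
        return 'NA'
    return SimplifiedDoc(text_in).text

spark.udf.register("text_simple_udf", text_simple_udf)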