
Python: How to create a DataFrame from specific sections of a text log file using PySpark


I am new to PySpark... I have a large log file that contains data like this:

sfdfd
FSDFDFFDHFGJKFJKYKLJK,eligert,tegtriyu,.
SGGGFSDF

==========================================  
Roll Name   class  
==========================================  
1     avb    wer21g2
------------------------------------------  

===========================================  
empcode   Emnname   Dept   Address   
===========================================  
12d      sf        sdf22    dghsjf  
asf2    asdfw2     df21df   fsfsfg  
dsf21   sdf2       df2      sdgfsgf  
------------------------------------------- 
Now I want to split this file into multiple RDDs/DataFrames using Spark with Python (PySpark). I was able to do this in Scala with the newAPIHadoopFile API; now I want to do the same in PySpark. Can anyone help me?

The expected output is:

Roll Name class  
1   avb   wer21g2  


empcode   Emnname   Dept   Address  
12d      sf        sdf22    dghsjf  
asf2    asdfw2     df21df   fsfsfg  
dsf21   sdf2       df2      sdgfsgf  
Here is the code I have tried:

out = []
with open(path) as f:
    for line in f:
        # look for the header line that marks the start of a section
        if line.rstrip() == findStr:
            tmp = [line]
            # consume lines until the section terminator
            for line in f:
                if line.rstrip() == EndStr:
                    out.append(tmp)
                    break
                tmp.append(line)
# the with-block closes the file; no explicit f.close() is needed

SMN_df = spark.createDataFrame(tmp, StringType())  # requires: from pyspark.sql.types import StringType
SMN_df.show(truncate=False)
I am able to create a dataframe, but I am not getting the expected output. Can anyone help me?
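Two things in the snippet above would explain the wrong output: tmp only ever holds the last section found (every section is accumulated in out), and createDataFrame(tmp, StringType()) produces a single string column rather than separate fields. A minimal sketch, assuming the out list built by the loop above and an active spark session; the block_to_df helper is hypothetical, not from the original post:

import re

def block_to_df(block):
    # drop blank and decorative ===/--- lines, split columns on whitespace
    rows = [re.split(r"\s+", ln.strip())
            for ln in block
            if ln.strip() and set(ln.strip()) - set("=-")]
    header, data = rows[0], rows[1:]
    # passing a list of column names as the schema lets Spark infer the types
    return spark.createDataFrame(data, header)

for block in out:   # 'out' holds every section, not just the last one
    block_to_df(block).show(truncate=False)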

See the attached screenshot for more details.


from pyspark.sql import SparkSession
import re

spark = (SparkSession.builder
         .config("spark.sql.warehouse.dir", "file:///C:/temp")
         .appName("SparkSQL")
         .getOrCreate())

path = "C:/Users/Rudrashis/Desktop/test2.txt"
Txtpath = "L:/SparkScala/test.csv"
EndStr = "---------------------------------"    # section terminator
FilterStr = "================================="  # decorative separator to drop
def prepareDataset(Findstr):
    # return the lines between the header (Findstr) and the terminator (EndStr),
    # with runs of whitespace collapsed to commas; None if the header is absent
    with open(path) as f:
        for line in f:
            # NB: == is an exact match, so Findstr and EndStr must equal the
            # file's lines character for character (after rstrip)
            if line.rstrip() == Findstr:
                tmp = [re.sub(r"\s+", ",", line.strip())]
                for line in f:
                    if line.rstrip() == EndStr:
                        break
                    tmp.append(re.sub(r"\s+", ",", line.strip()))
                return tmp

def Makesv(Lstcommon):
    # write the collected rows as CSV; Txtpath is the same file Spark reads back
    with open(Txtpath, "w") as outfile:
        for entry in map(str.strip, Lstcommon):   # str.strip, not str.strip()
            outfile.write(entry + "\n")           # one row per line

### 1st block: student section ################
LstStudent = prepareDataset("Roll Name   class")   # must match the header line in the file exactly
LstStudent = list(filter(lambda a: a != FilterStr, LstStudent))
Makesv(LstStudent)

Student_DF = (spark.read.format("csv")   # built-in csv source (Spark 2+)
              .options(header="true", inferSchema="true")
              .load(Txtpath))
Student_DF.show(truncate=False)
######### end 1st block ####

##### 2nd block: employee section ####
LstEmp = prepareDataset("empcode   Emnname   Dept   Address")
LstEmp = list(filter(lambda a: a != FilterStr, LstEmp))
Makesv(LstEmp)

Emp_DF = (spark.read.format("csv")
          .options(header="true", inferSchema="true")
          .load(Txtpath))
Emp_DF.show(truncate=False)
##### end of 2nd block #####
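For completeness: the newAPIHadoopFile route mentioned for Scala also exists on PySpark's SparkContext, and setting textinputformat.record.delimiter makes Hadoop split the file into whole sections instead of single lines, so the intermediate CSV file is not needed at all. A minimal sketch reusing the path and EndStr names from above; the parse helper is hypothetical:

import re

# One whole section per record: Hadoop splits on the delimiter instead of "\n".
# Caveat: the delimiter must match the file byte for byte (exact dash count).
conf = {"textinputformat.record.delimiter": EndStr}
sections = (spark.sparkContext.newAPIHadoopFile(
                path,
                "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
                "org.apache.hadoop.io.LongWritable",
                "org.apache.hadoop.io.Text",
                conf=conf)
            .values())                      # keep the text, drop byte offsets

def parse(section, header_word):
    # drop blank and decorative ===/--- lines, then start at the header row
    lines = [l.strip() for l in section.splitlines()
             if l.strip() and set(l.strip()) - set("=-")]
    start = next(i for i, l in enumerate(lines) if l.startswith(header_word))
    return [re.split(r"\s+", l) for l in lines[start:]]

student = parse(sections.filter(lambda s: "Roll" in s).first(), "Roll")
Student_DF2 = spark.createDataFrame(student[1:], student[0])
Student_DF2.show(truncate=False)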