Inserting a pandas DataFrame from Python into Snowflake
When I try to insert a DataFrame into a table in Snowflake using the write_pandas function, only NULL values end up in the table. I then tried creating the table with a NOT NULL constraint, but when I run the code that inserts the DataFrame, it raises an error because NULL values cannot be inserted into a column that does not allow them. Can anyone help me solve this? I have checked, and the DataFrame in Python is not empty.
import os
import csv
import pandas as pd
import snowflake.connector
import glob
from snowflake.connector.pandas_tools import write_pandas
ctx = snowflake.connector.connect(
    user='****',
    password='****',
    account='****'
)
cs = ctx.cursor()
def split(filehandler, keep_headers=True):
    """
    Split the file into pieces on a row-count basis.
    """
    reader = csv.reader(filehandler, delimiter=',')
    # Variable declarations:
    row_limit = 1000
    output_name_template = 'output_%s.csv'
    output_path = r'C:\Users\Nenad\Desktop\Data\mix'
    current_piece = 1
    current_out_path = os.path.join(
        output_path,
        output_name_template % current_piece
    )
    # newline='' avoids blank lines in CSV output on Windows
    current_out_writer = csv.writer(open(current_out_path, 'w', newline=''), delimiter=',')
    current_limit = row_limit
    if keep_headers:
        headers = next(reader)
        current_out_writer.writerow(headers)
    for i, row in enumerate(reader):
        if i + 1 > current_limit:
            current_piece += 1
            current_limit = row_limit * current_piece
            current_out_path = os.path.join(
                output_path,
                output_name_template % current_piece
            )
            current_out_writer = csv.writer(open(current_out_path, 'w', newline=''), delimiter=',')
            if keep_headers:
                current_out_writer.writerow(headers)
        current_out_writer.writerow(row)
if __name__ == "__main__":
print("file split Begins")
split(open(r"C:\Users\Nenad\PycharmProjects\untitled15\bigtable_py.csv"))
print("File split Ends")
os.chdir(r'C:\Users\Nenad\Desktop\Data\mix')
file_extension=".csv"
all_filenames = [i for i in glob.glob(f"*{file_extension}")]
sql = "USE role ACCOUNTADMIN"
cs.execute(sql)
sql = "SELECT CURRENT_ROLE()"
cs.execute(sql)
for file in all_filenames:
df=pd.read_csv(file, delimiter=',')
cs.execute("USE DRAGANA")
write_pandas(ctx, df, 'TABLES')
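A frequent cause of all-NULL rows with write_pandas is an identifier-case mismatch: Snowflake folds unquoted column names (as in a plain CREATE TABLE) to upper case, while write_pandas by default quotes the DataFrame's column names exactly as they appear, so lowercase pandas columns may not match the table's uppercase columns. A minimal sketch of the usual workaround, uppercasing the column names before loading (the two-column frame here is a hypothetical stand-in for the CSV data):

```python
import pandas as pd

# Hypothetical sample standing in for one of the split CSV files.
df = pd.DataFrame({"name": ["alpha", "beta"], "value": [1, 2]})

# Match Snowflake's default (upper-case) identifier folding so the
# quoted column names written by write_pandas line up with the table.
df.columns = [c.upper() for c in df.columns]

# write_pandas(ctx, df, 'TABLES')  # then load as in the code above
```

If the mismatch is the cause, printing the table's column names (SHOW COLUMNS) against df.columns should make it visible immediately.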
Comment: Can you share your code or a snippet of it?
Reply: I have edited the post; you can see the code now.
Comment: How big is bigtable_py.csv? Try creating a sample file of about 100 rows from it, read only that file, and print df right after the df = pd.read_csv(file, delimiter=',') line, to see what is actually being written to Snowflake.
Reply: bigtable_py.csv has 69,000 rows. I tried another, smaller file and it reported the same error.
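The debugging step suggested in the comment can be sketched as follows; the inline CSV here is a hypothetical stand-in for a 100-row sample of bigtable_py.csv, and nrows caps how much is read:

```python
import io
import pandas as pd

# Hypothetical stand-in for a small sample cut from bigtable_py.csv.
sample = io.StringIO("name,value\nalpha,1\nbeta,2\n")

# Read at most 100 rows, then inspect what would be sent to Snowflake.
df = pd.read_csv(sample, delimiter=',', nrows=100)
print(df.head())
print(df.dtypes)
```

If the printed frame already shows NaN in every column, the problem is in the CSV parsing (e.g. delimiter or header mismatch) rather than in the Snowflake load itself.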