Python: I can't get pandas to correctly concat my dataframes


I am trying to merge two 9-column dataframes together. But instead of stacking them vertically the normal way, pandas keeps trying to add 9 more empty columns. Any idea how to stop this?

The output looks like this:

0,1,2,3,4,5,6,7,8,9,10,11,12,13,0,1,10,11,12,13,2,3,4,5,6,7,8,9
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency ,,,,,,,,,,,,,,
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.00,48,48,0,Ken,(903) 280-7878,UrTruckBroker ,,,,,,,,,,,,,,
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc. ,,,,,,,,,,,,,,
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv ,,,,,,,,,,,,,,
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency ,,,,,,,,,,,,,,

...


,,,,,,,,,,,,,,03/02/2021,Knapp,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Jackson,NE,Full,Flatbed / Step Deck,0.0,48.0,48.0
,,,,,,,,,,,,,,03/02/2021,Knapp,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Sterling,IL,Full,Flatbed / Step Deck,0.0,48.0,48.0
,,,,,,,,,,,,,,03/02/2021,Milwaukee,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Great Falls,MT,Full,Flatbed / Step Deck,0.0,45.0,48.0
,,,,,,,,,,,,,,03/02/2021,Algoma,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Pamplico,SC,Full,Flatbed / Step Deck,0.0,48.0,48.0


import pandas as pd
df_new = pd.DataFrame({'0': {0: '10/23/2020',
  1: '10/22/2020',
  2: '10/23/2020',
  3: '10/23/2020',
  4: '10/30/2020'},
 '1': {0: 'New_Castle',
  1: 'Wilmington',
  2: 'WILMINGTON',
  3: 'WILMINGTON',
  4: 'New_Castle'},
 '2': {0: 'DE', 1: 'DE', 2: 'DE', 3: 'DE', 4: 'DE'},
 '3': {0: 'Gary', 1: 'METHUEN', 2: 'METHUEN', 3: 'METHUEN', 4: 'Gary'},
 '4': {0: 'IN', 1: 'MA', 2: 'MA', 3: 'MA', 4: 'IN'},
 '5': {0: 'Full', 1: 'Full', 2: 'Full', 3: 'Full', 4: 'Full'},
 '6': {0: 'Flatbed',
  1: 'Flatbed_/_Step_Deck',
  2: 'Flatbed_w/Tarps',
  3: 'Flatbed_w/Tarps',
  4: 'Flatbed'},
 '7': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 945.0},
 '8': {0: 46, 1: 48, 2: 47, 3: 48, 4: 46},
 '9': {0: 48, 1: 48, 2: 1, 3: 1, 4: 48},
 '10': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
 '11': {0: 'Dispatch', 1: 'Ken', 2: 'Dispatch', 3: 'Dispatch', 4: 'Dispatch'},
 '12': {0: '(800)_488-1860',
  1: '(903)_280-7878',
  2: '(912)_748-3801',
  3: '(541)_826-4786',
  4: '(800)_488-1860'},
 '13': {0: 'Meadow_Lark_Agency_',
  1: 'UrTruckBroker_',
  2: 'DSV_Road_Inc._',
  3: 'Sureway_Transportation_Co_/_Anderson_Trucking_Serv_',
  4: 'Meadow_Lark_Agency_'}})


# second sample frame; appended below as df_old
df_old = pd.DataFrame({'0': {0: '10/23/2020',
  1: '10/22/2020',
  2: '10/23/2020',
  3: '10/23/2020',
  4: '10/30/2020'},
 '1': {0: 'New_Castle',
  1: 'Wilmington',
  2: 'WILMINGTON',
  3: 'WILMINGTON',
  4: 'New_Castle'},
 '2': {0: 'DE', 1: 'DE', 2: 'DE', 3: 'DE', 4: 'DE'},
 '3': {0: 'Gary', 1: 'METHUEN', 2: 'METHUEN', 3: 'METHUEN', 4: 'Gary'},
 '4': {0: 'IN', 1: 'MA', 2: 'MA', 3: 'MA', 4: 'IN'},
 '5': {0: 'Full', 1: 'Full', 2: 'Full', 3: 'Full', 4: 'Full'},
 '6': {0: 'Flatbed',
  1: 'Flatbed_/_Step_Deck',
  2: 'Flatbed_w/Tarps',
  3: 'Flatbed_w/Tarps',
  4: 'Flatbed'},
 '7': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 945.0},
 '8': {0: 46, 1: 48, 2: 47, 3: 48, 4: 46},
 '9': {0: 48, 1: 48, 2: 1, 3: 1, 4: 48},
 '10': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
 '11': {0: 'Dispatch', 1: 'Ken', 2: 'Dispatch', 3: 'Dispatch', 4: 'Dispatch'},
 '12': {0: '(800)_488-1860',
  1: '(903)_280-7878',
  2: '(912)_748-3801',
  3: '(541)_826-4786',
  4: '(800)_488-1860'},
 '13': {0: 'Meadow_Lark_Agency_',
  1: 'UrTruckBroker_',
  2: 'DSV_Road_Inc._',
  3: 'Sureway_Transportation_Co_/_Anderson_Trucking_Serv_',
  4: 'Meadow_Lark_Agency_'}})

# note: DataFrame.append is deprecated (removed in pandas 2.0); pd.concat is the replacement
df_new.append(df_old, ignore_index=True)
#OR
pd.concat([df_new, df_old])
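Both calls only stack the frames row-wise because the two frames share exactly the same column labels; if the labels differed, pandas would align on them and pad the gaps with NaN. A minimal sanity-check sketch, continuing the snippet above:

# Sanity check (illustration only): identical labels are what make the two
# sample frames stack into 10 rows x 14 columns.
assert list(df_new.columns) == list(df_old.columns)
stacked = pd.concat([df_new, df_old], ignore_index=True)
print(stacked.shape)  # (10, 14)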
The code is a web request that grabs the data, which I save to a dataframe and then concatenate with another dataframe that comes from a CSV. I then save all of that back to the same CSV:

import json
import requests
import pandas as pd

this_csv = 'freights_trulos.csv'

try:
  old_df = pd.read_csv(this_csv)
except Exception as e:  # fall back to an empty frame if the CSV does not exist yet or cannot be read
  print(e)
  old_df = pd.DataFrame()

state, equip = 'DE', 'Flat'
url = "https://backend-a.trulos.com/load-table/grab_loads.php?state=%s&equipment=%s" % (state, equip)

payload = {}
headers = {
...
}

response = requests.request("GET", url, headers=headers, data=payload)

# print(response.text)
parsed = json.loads(response.content)

data = [r[0:13] + [r[-4].split('<br/>')[-2].split('>')[-1]] for r in parsed]

df = pd.DataFrame(data=data)

if not old_df.empty:
  # concatenate old and new and remove duplicates
  
  
  # df.reset_index(drop=True, inplace=True)    
  # old_df.reset_index(drop=True, inplace=True)
  # df = pd.concat([old_df, df], ignore_index=True)   <--- CONCAT HAS SAME ISSUES AS APPEND
  df = df.append(old_df, ignore_index=True)
  # remove duplicates on cols

df = df.drop_duplicates()  # assign the result back; drop_duplicates() returns a new frame
df.to_csv(this_csv, index=False)
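One thing worth checking in this pipeline, as an assumption rather than something confirmed above: pd.DataFrame(data=data) gives the scraped frame integer column labels 0-13, while pd.read_csv on a file whose header row is 0,1,...,13 gives old_df the string labels '0'-'13'. Since append/concat align on labels, that mismatch alone would produce the extra empty columns and the 0,1,10,11,... column ordering seen in the output. A minimal, self-contained sketch of the effect (the frame names are made up for illustration):

import pandas as pd

# Hypothetical illustration: integer labels (frame built from a list of lists)
# vs. string labels (frame read back from a CSV whose header row is "0,1,2").
scraped = pd.DataFrame([['10/23/2020', 'New Castle', 'DE']])               # columns: 0, 1, 2 (ints)
from_csv = pd.DataFrame([['10/22/2020', 'Wilmington', 'DE']],
                        columns=['0', '1', '2'])                           # columns: '0', '1', '2' (strings)

print(pd.concat([scraped, from_csv]).shape)    # (2, 6): six columns, half NaN in every row
scraped.columns = scraped.columns.astype(str)  # normalise the labels before combining...
print(pd.concat([scraped, from_csv]).shape)    # (2, 3): ...and the rows stack as intended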
old_df to csv:

0,1,2,3,4,5,6,7,8,9,10,11,12,13
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.0,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.0,48,48,0,Ken,(903) 280-7878,UrTruckBroker 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.0,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc. 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.0,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv 
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.0,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
0,1,2,3,4,5,6,7,8,9,10,11,12,13
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.00,48,48,0,Ken,(903) 280-7878,UrTruckBroker 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc. 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv 
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
new_df to csv:

0,1,2,3,4,5,6,7,8,9,10,11,12,13
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.0,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.0,48,48,0,Ken,(903) 280-7878,UrTruckBroker 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.0,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc. 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.0,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv 
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.0,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
0,1,2,3,4,5,6,7,8,9,10,11,12,13
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.00,48,48,0,Ken,(903) 280-7878,UrTruckBroker 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc. 
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv 
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency 

I guess the problem might be how you read in the data. If I copy your sample data into Excel, split it on commas, and import it into pandas, everything is fine. However, if I split on both commas and spaces, I also get +9 extra columns. So you could try debugging this by replacing all whitespace before creating the dataframes.
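A minimal sketch of that whitespace-replacement idea, assuming the rows are lists of strings like the data list built from the web response (the helper name is made up):

import re

def normalize_whitespace(rows):
    # Collapse any run of whitespace inside a cell into a single underscore so
    # stray spaces cannot shift values into extra columns (debugging aid only).
    return [[re.sub(r'\s+', '_', str(cell)) for cell in row] for row in rows]

# e.g. data = normalize_whitespace(data) before calling pd.DataFrame(data=data)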

I also used your sample data, and if I initialize it like this it works fine for me:

import pandas as pd
df_new = pd.DataFrame({'0': {0: '10/23/2020',
  1: '10/22/2020',
  2: '10/23/2020',
  3: '10/23/2020',
  4: '10/30/2020'},
 '1': {0: 'New_Castle',
  1: 'Wilmington',
  2: 'WILMINGTON',
  3: 'WILMINGTON',
  4: 'New_Castle'},
 '2': {0: 'DE', 1: 'DE', 2: 'DE', 3: 'DE', 4: 'DE'},
 '3': {0: 'Gary', 1: 'METHUEN', 2: 'METHUEN', 3: 'METHUEN', 4: 'Gary'},
 '4': {0: 'IN', 1: 'MA', 2: 'MA', 3: 'MA', 4: 'IN'},
 '5': {0: 'Full', 1: 'Full', 2: 'Full', 3: 'Full', 4: 'Full'},
 '6': {0: 'Flatbed',
  1: 'Flatbed_/_Step_Deck',
  2: 'Flatbed_w/Tarps',
  3: 'Flatbed_w/Tarps',
  4: 'Flatbed'},
 '7': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 945.0},
 '8': {0: 46, 1: 48, 2: 47, 3: 48, 4: 46},
 '9': {0: 48, 1: 48, 2: 1, 3: 1, 4: 48},
 '10': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
 '11': {0: 'Dispatch', 1: 'Ken', 2: 'Dispatch', 3: 'Dispatch', 4: 'Dispatch'},
 '12': {0: '(800)_488-1860',
  1: '(903)_280-7878',
  2: '(912)_748-3801',
  3: '(541)_826-4786',
  4: '(800)_488-1860'},
 '13': {0: 'Meadow_Lark_Agency_',
  1: 'UrTruckBroker_',
  2: 'DSV_Road_Inc._',
  3: 'Sureway_Transportation_Co_/_Anderson_Trucking_Serv_',
  4: 'Meadow_Lark_Agency_'}})


# second sample frame; appended below as df_old
df_old = pd.DataFrame({'0': {0: '10/23/2020',
  1: '10/22/2020',
  2: '10/23/2020',
  3: '10/23/2020',
  4: '10/30/2020'},
 '1': {0: 'New_Castle',
  1: 'Wilmington',
  2: 'WILMINGTON',
  3: 'WILMINGTON',
  4: 'New_Castle'},
 '2': {0: 'DE', 1: 'DE', 2: 'DE', 3: 'DE', 4: 'DE'},
 '3': {0: 'Gary', 1: 'METHUEN', 2: 'METHUEN', 3: 'METHUEN', 4: 'Gary'},
 '4': {0: 'IN', 1: 'MA', 2: 'MA', 3: 'MA', 4: 'IN'},
 '5': {0: 'Full', 1: 'Full', 2: 'Full', 3: 'Full', 4: 'Full'},
 '6': {0: 'Flatbed',
  1: 'Flatbed_/_Step_Deck',
  2: 'Flatbed_w/Tarps',
  3: 'Flatbed_w/Tarps',
  4: 'Flatbed'},
 '7': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 945.0},
 '8': {0: 46, 1: 48, 2: 47, 3: 48, 4: 46},
 '9': {0: 48, 1: 48, 2: 1, 3: 1, 4: 48},
 '10': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
 '11': {0: 'Dispatch', 1: 'Ken', 2: 'Dispatch', 3: 'Dispatch', 4: 'Dispatch'},
 '12': {0: '(800)_488-1860',
  1: '(903)_280-7878',
  2: '(912)_748-3801',
  3: '(541)_826-4786',
  4: '(800)_488-1860'},
 '13': {0: 'Meadow_Lark_Agency_',
  1: 'UrTruckBroker_',
  2: 'DSV_Road_Inc._',
  3: 'Sureway_Transportation_Co_/_Anderson_Trucking_Serv_',
  4: 'Meadow_Lark_Agency_'}})

df_new.append(df_old, ignore_index=True)
#OR
pd.concat([df_new, df_old])

Please provide a sample input for df and old_df.
I don't see any obvious problem with the df.append call. Can you check and post the contents of data?
@Andreas I see that the dtype of column 6 changed or got altered somehow (possibly other columns as well). It used to be one decimal place and now it is two. Is that what is causing the problem? If so, do you know how I can fix it?
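On the decimals question: once the CSV is read back, 0.0 and 0.00 typically parse to the same float, so the difference usually comes from how the file was written rather than from the data itself. A minimal sketch, continuing the pipeline snippet above and assuming the rate column ends up labelled '7':

# Assumption: '7' is the rate column from the sample output (it may be the
# integer 7 instead, depending on how the column labels ended up).
df['7'] = pd.to_numeric(df['7'], errors='coerce')      # ensure a numeric dtype
df = df.drop_duplicates()                              # 0.0 and 0.00 now compare equal as floats
df.to_csv(this_csv, index=False, float_format='%.2f')  # write floats with a fixed two-decimal format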