
Handling a fairly large Python dictionary on Heroku?

Tags: Python, Django, Dictionary

After doing some calculations with pandas I ended up with about 17,000 records, and the resulting dictionary is too large to handle.

To give you an idea, a single record in the dictionary looks like this:

[{'ticker': '24STOR', 'stock': '24Storage AB (publ)', 'exchange__exchange_code': 'ST', 'earnings_yield': Decimal('0.0000'), 'roic': Decimal('0.0000')}]
which I converted from a pandas DataFrame:

ranked_companies = df.to_dict(orient="records") # this is the dictionary with 17000 records
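For context, `to_dict(orient="records")` materializes a dict for every row at once, so all 17,000 record dicts live in memory simultaneously. A minimal sketch of a lazier alternative (assuming nothing beyond pandas itself, with a hypothetical helper name `iter_records`) iterates the frame with `itertuples`, so only one row's dict needs to exist at a time:

```python
import pandas as pd

df = pd.DataFrame({"ticker": ["24STOR", "AAK"], "roic": [0.0, 0.1]})

# to_dict(orient="records") builds every row-dict up front; a generator
# yields one dict at a time, so peak memory is roughly one row.
def iter_records(frame):
    for row in frame.itertuples(index=False):
        yield row._asdict()

first = next(iter_records(df))
```

Whether this helps depends on what is done with the records downstream: the savings only materialize if the consumer also processes rows one at a time instead of collecting them into a list.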
Then I loop over it:

stocks_to_upsert = []
for company in ranked_companies:
    stocks_to_upsert.append(
        Stock(
            ticker=company["ticker"],
            stock=company["stock"],
            master_id=company["master_id"],
            exchange=Exchange.objects.get(exchange_code=company["exchange__exchange_code"]),
            earnings_yield=company["earnings_yield"],
            roic=company["roic"],
            roic_rank=company["roic_rank"],
            ey_rank=company["ey_rank"],
            sum_rank=company["sum_rank"],
            latest_report=datetime.strptime(company["latest_report"], "%Y-%m-%d").date(),
        )
    )
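Incidentally, the `Exchange.objects.get(...)` inside that loop issues one database query per record (an N+1 pattern), which costs time and memory on top of the list itself. A sketch of the usual fix, with a plain-Python stand-in class so the snippet is self-contained (the real code would build the table from a single `Exchange.objects.all()` query), is to resolve exchanges through a dict built once up front:

```python
# Hypothetical stand-in for the Django Exchange model, just to show the pattern.
class Exchange:
    def __init__(self, exchange_code):
        self.exchange_code = exchange_code

# In the real code this list would come from one Exchange.objects.all() query.
all_exchanges = [Exchange("ST"), Exchange("HE"), Exchange("CO")]

# Build the lookup table once; each of the 17,000 rows then resolves its
# exchange with a dict lookup instead of a separate database query.
exchange_by_code = {e.exchange_code: e for e in all_exchanges}

company = {"exchange__exchange_code": "ST"}
exchange = exchange_by_code[company["exchange__exchange_code"]]
```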
and then bulk-upsert those records using a Django package:

Stock.objects.bulk_update_or_create(
    stocks_to_upsert,
    ["ticker", "earnings_yield", "roic", "roic_rank", "ey_rank", "sum_rank", "latest_report"],
    match_field="master_id",
)
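One way to bound peak memory here, regardless of how the upsert library behaves, is to build and write the `Stock` instances in batches instead of accumulating all 17,000 in one list. A generic standard-library chunking helper (hypothetical name `chunked`; this is a sketch of the batching idea, not part of the package used above) could look like:

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from any iterable,
    so only one batch of items needs to live in memory at a time."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

batches = list(chunked(range(7), 3))
```

Each batch of companies could then be turned into `Stock` objects and passed to the bulk call in turn, keeping peak memory proportional to the batch size rather than to the full dataset.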
I'm running out of memory on Heroku, and I need to optimize the ranked_companies dictionary somehow (I think that's where the problem is). If I limit the dictionary, everything works fine:

df.head(100).to_dict(orient="records") # this executes just fine
df.to_dict(orient="records") # this makes me run out of memory in my heroku logs
# error looks like this: Error R14 (Memory quota exceeded)
Any ideas how to solve this?