
Retrieving data from a large SAP table with RFC_READ_TABLE using the Perl module sapnwrfc


I want to use the Perl module sapnwrfc to retrieve data from a large SAP table (several million entries) and export it to a CSV file.

The idea is to use the function module RFC_READ_TABLE, as follows:

# Connect to SAP system
# [...]
my $rd = $conn->function_lookup("RFC_READ_TABLE");
my $rc = $rd->create_function_call;
$rc->QUERY_TABLE("/PLMB/AUTH_OBSID");
$rc->DELIMITER("@");
$rc->FIELDS([ {'FIELDNAME' => 'OBJECT_ID'}, {'FIELDNAME' => 'SID'} ]);
$rc->OPTIONS([{'TEXT' => 'OBJ_TYPE = \'PLM_DIR\''}]);  
$rc->invoke;

# Iterate over $rc->DATA and export it to CSV file
# [...]
$conn->disconnect;
The problem is that the script terminates with an out-of-memory error, because the retrieved data exceeds the available memory.


Is there a way to avoid this, for example with a paging mechanism or something similar?

That is not what RFC_READ_TABLE is intended for. You will have to resort to some other extraction method.

Based on a Python code snippet, I found a solution to my problem.

Using the importing parameters ROWSKIPS and ROWCOUNT of the function module RFC_READ_TABLE, I can fetch the data in chunks of rows:

use strict;
use warnings;
use SAPNW::Rfc;

# Meaning of ROWSKIPS and ROWCOUNT as parameters of function module RFC_READ_TABLE:
#
# For example, ROWSKIPS = 0, ROWCOUNT = 500 fetches the first 500 records,
# then ROWSKIPS = 500, ROWCOUNT = 500 gets the next 500 records, and so on.
# If both are left at 0, no chunking is done. The maximum value for either of these fields is 999999.
my $RecordsCounter = 1;    # loop flag: set to 0 once a fetch returns no more data
my $Iteration = 0;         # number of fetches already performed
my $FetchSize = 1000;      # rows to retrieve per fetch
my $RowSkips = 0;          # rows to skip, recalculated on every iteration
my $RowCount = 1000;       # ROWCOUNT passed to RFC_READ_TABLE (same as $FetchSize)

# Open RFC connection
my $conn = SAPNW::Rfc->rfc_connect;

# Look up the descriptor of the function module
my $rd = $conn->function_lookup("RFC_READ_TABLE");

# Function call object, created anew on every iteration
my $rc;

# Loop to get data out of table in several chunks
while ($RecordsCounter > 0){

    # Calculate the already retrieved rows that need to be skipped
    $RowSkips = $Iteration * $FetchSize;

    # Reference to function module call
    $rc = $rd->create_function_call;

    # Table where data needs to be extracted
    $rc->QUERY_TABLE("/PLMB/AUTH_OBSID");

    # Delimiter between columns
    $rc->DELIMITER("@");

    # Columns to be retrieved
    $rc->FIELDS([ {'FIELDNAME' => 'OBJECT_ID'}, {'FIELDNAME' => 'SID'} ]);

    # SELECT criteria
    $rc->OPTIONS([{'TEXT' => 'OBJ_TYPE = \'PLM_DIR\''}]);

    # Number of rows to retrieve in this fetch
    $rc->ROWCOUNT($RowCount);

    # Number of rows to skip, i.e. rows already retrieved in previous fetches
    $rc->ROWSKIPS($RowSkips);

    # Function call
    $rc->invoke;

    $Iteration++;

    # Data retrieved
    if (defined $rc->DATA->[0]) {

        print "Fetch $Iteration\n";

        foreach my $TableLine ( @{ $rc->DATA } ) {
            print "$TableLine->{WA}\n";
        }
    }

    # No more data to retrieve
    else {

        # Leave loop
        $RecordsCounter = 0;
    }
}

# Disconnect RFC connection
$conn->disconnect;
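
The loop above only prints each row; the final step of the original goal, writing the rows to a CSV file, is still missing. Below is a minimal sketch of how each fetched chunk could be appended to a CSV file instead of being printed. It assumes the CPAN module Text::CSV is available; the helper name write_chunk_to_csv, the output file name auth_obsid.csv, and the exact call placement inside the while loop are illustrative choices, not part of the original answer.

use strict;
use warnings;
use Text::CSV;

# Hypothetical helper: append one fetched chunk to an already opened CSV handle
sub write_chunk_to_csv {
    my ($csv, $fh, $data) = @_;
    foreach my $TableLine ( @{ $data } ) {
        # WA holds the row as a single '@'-delimited string, as requested via DELIMITER("@")
        my @fields = split /\@/, $TableLine->{WA};
        s/\s+$// for @fields;    # strip trailing padding of fixed-width fields
        $csv->print($fh, \@fields);
    }
}

# Before the while loop: open the output file and write a header row
open my $fh, '>', 'auth_obsid.csv' or die "Cannot open auth_obsid.csv: $!";
my $csv = Text::CSV->new({ binary => 1, eol => "\n" });
$csv->print($fh, [ 'OBJECT_ID', 'SID' ]);

# Inside the while loop, instead of the print statements:
# write_chunk_to_csv($csv, $fh, $rc->DATA);

# After the loop:
# close $fh or die "Cannot close CSV file: $!";

Writing each chunk out as soon as it arrives keeps memory usage bounded by the chunk size, which is the whole point of the ROWSKIPS/ROWCOUNT approach.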

Welcome to Stack Overflow and the Perl tag. Please edit your question and add a link to the module you are using. It probably is the one we think, but it has a rather unusual naming scheme, so better safe than sorry. Does it tell you on which line the error occurs? Please include the exact error message. The part that iterates over the data might also be interesting; I would assume that if you read the data line by line correctly, it would not use up all of your memory.