
Javascript: Iterating over multiple payloads and taking multiple screenshots with Puppeteer on AWS Lambda


I am currently using the Puppeteer AWS Lambda layer shown below to crawl 30 URLs and create and save screenshots in S3. At the moment I send 30 separate payloads, so 30 AWS Lambda invocations run.

Each JSON payload contains a URL and an image file name, and is sent via a POST request to API Gateway every 2-3 seconds. The first 6 to 9 Lambda functions in the list appear to run correctly, but then they start failing with "Browser has disconnected" and "Navigation failed" errors, as reported in AWS CloudWatch.

So I am looking for a different approach: how can I edit the code below to take screenshots of a set of 30 URLs in one batch, by processing a single JSON payload array (for example, with a for loop)?

Here is my current code, which takes a single screenshot per AWS Lambda invocation and uploads it to S3:

// src/capture.js

// this module will be provided by the layer
const chromeLambda = require("chrome-aws-lambda");

// aws-sdk is always preinstalled in AWS Lambda in all Node.js runtimes
const S3Client = require("aws-sdk/clients/s3");

process.setMaxListeners(0) // <== Important line - fixes the MaxListenersExceededWarning

// create an S3 client
const s3 = new S3Client({ region: process.env.S3_REGION });

// default browser viewport size
const defaultViewport = {
  width: 1920,
  height: 1080
};

// here starts our function!
exports.handler = async event => {

  // launch a headless browser
  const browser = await chromeLambda.puppeteer.launch({
    args: chromeLambda.args,
    executablePath: await chromeLambda.executablePath,
    defaultViewport
  });
  console.log("Event URL string is ", event.url)

  const url = event.url;
  const domain = (new URL(url)).hostname.replace('www.', '');

  // open a new tab
  const page = await browser.newPage();

  // navigate to the page
  await page.goto(event.url);

  // take a screenshot
  const buffer = await page.screenshot()

  // upload the image using the page's domain as the filename
  const result = await s3
    .upload({
      Bucket: process.env.S3_BUCKET,
      Key: domain + `.png`,
      Body: buffer,
      ContentType: "image/png",
      ACL: "public-read"
    })
    .promise();

  // return the uploaded image url
  return { url: result.Location };
};

I tried to reproduce the issue and modified the code to use a loop over the payload array.

While investigating, I found a few points worth noting:

  • The Lambda needs plenty of RAM (at least 1 GB in my tests, but more is better). Using too little RAM leads to failures.
  • The Lambda timeout must be large enough to process a long list of URLs for screenshots.
  • The img attribute in the JSON payload is not used at all. I did not change this behavior, since I don't know whether it is intentional.
  • Errors similar to yours were observed when running an async for loop and/or when not closing the opened pages.
  • I modified the return value to output an array of S3 URLs.
  • URL was undefined in the nodejs12.x runtime in my tests, so the modified code imports it explicitly from the url module.
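On the first two points, memory and timeout can be raised from the AWS CLI (a sketch only; the function name is a placeholder, and 1024 MB / 300 s are illustrative values based on the "at least 1 GB" observation above, not tested limits):

```shell
# Raise memory and timeout for the screenshot Lambda
# (function name is a placeholder -- substitute your own)
aws lambda update-function-configuration \
  --function-name screenshot-function \
  --memory-size 1024 \
  --timeout 300
```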
Modified code

Here is the modified code that ran in my tests with the nodejs12.x runtime:

// src/capture.js

var URL = require('url').URL;

// this module will be provided by the layer
const chromeLambda = require("chrome-aws-lambda");

// aws-sdk is always preinstalled in AWS Lambda in all Node.js runtimes
const S3Client = require("aws-sdk/clients/s3");

process.setMaxListeners(0) // <== Important line - fixes the MaxListenersExceededWarning

// create an S3 client
const s3 = new S3Client({ region: process.env.S3_REGION });

// default browser viewport size
const defaultViewport = {
  width: 1920,
  height: 1080
};

// here starts our function!
exports.handler = async event => {

  // launch a headless browser
  const browser = await chromeLambda.puppeteer.launch({
    args: chromeLambda.args,
    executablePath: await chromeLambda.executablePath,
    defaultViewport
  });
  
  const s3_urls = [];

  for (const e of event) {
    console.log(e);

    console.log("Event URL string is ", e.url)

    const url = e.url;
    const domain = (new URL(url)).hostname.replace('www.', '');

    // open a new tab
    const page = await browser.newPage();

    // navigate to the page
    await page.goto(e.url);

    // take a screenshot
    const buffer = await page.screenshot()

    // upload the image using the page's domain as the filename
    const result = await s3
      .upload({
        Bucket: process.env.S3_BUCKET,
        Key: domain + `.png`,
        Body: buffer,
        ContentType: "image/png",
        ACL: "public-read"
      })
      .promise();
      
    // close the tab to free resources before the next iteration
    await page.close();

    s3_urls.push({ url: result.Location });
  }

  await browser.close();

  // return the array of uploaded image URLs
  return s3_urls;
};

Sample output in S3

Your example payload also works fine for me. However, larger payloads of different URLs cause the "Navigation" error to reoccur. I had hoped the page-closing code in the answer would help with this. The array-handling part of the code works very well so far; I just need it to handle 12 or more payloads. – sigur7

@sigur7 Glad the answer was useful. I did test with larger payloads and they worked for me. Do you have a specific example? Maybe it is related to the particular websites you tried, rather than the size of the payload list?

You are right, it is strange, but a new batch of URLs (23 of them), all with trailing slashes, completed fine. Great work.
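Regarding the remaining "Navigation failed" errors on larger batches: one option (my suggestion, not part of the answer above) is to wrap each navigation in a try/catch so that one bad URL does not abort the whole batch, and to pass an explicit waitUntil/timeout to page.goto. The error-collecting loop itself can be sketched and exercised without a browser (captureAll is a name introduced here for illustration; the capture callback stands in for the goto/screenshot/upload steps):

```javascript
// Process each payload in sequence, collecting per-URL failures instead of
// letting one rejected navigation reject the whole handler. In the real
// Lambda, `capture` would contain something like:
//   await page.goto(e.url, { waitUntil: "networkidle2", timeout: 30000 });
async function captureAll(payloads, capture) {
  const results = [];
  for (const e of payloads) {
    try {
      results.push({ url: e.url, s3: await capture(e) });
    } catch (err) {
      results.push({ url: e.url, error: err.message });
    }
  }
  return results;
}

// Example with a fake capture step that fails for one URL:
captureAll(
  [{ url: "https://a.example" }, { url: "https://b.example" }],
  async e => {
    if (e.url.includes("b.example")) throw new Error("Navigation failed");
    return "https://bucket.s3.amazonaws.com/a.example.png";
  }
).then(results => console.log(results));
```

This way the handler still returns one entry per payload, with either an S3 URL or the error message for that URL.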

[
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/gavurin.com.png","url":"https://gavurin.com"},
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/google.com.png","url":"https://google.com"},
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/amazon.com","url":"https://www.amazon.com"},  
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/stackoverflow.com","url":"https://stackoverflow.com"},
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/duckduckgo.com","url":"https://duckduckgo.com"},
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/docs.aws.amazon.com","url":"https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-features.html"},  
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/github.com","url":"https://github.com"},  
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/github.com/shelfio/chrome-aws-lambda-layer","url":"https://github.com/shelfio/chrome-aws-lambda-layer"},  
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/gwww.youtube.com","url":"https://www.youtube.com"},   
    {"img":"https://s3screenshotbucket-useast1v5.s3.amazonaws.com/w3docs.com","url":"https://www.w3docs.com"}       
]
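If a single invocation with all 30 payloads still hits the Lambda timeout, a middle ground (my suggestion, not from the answer above) is to split the payload array into smaller chunks and send each chunk as one invocation:

```javascript
// Split a payload array into chunks of at most `size` entries,
// so each Lambda invocation handles a manageable batch.
function chunk(payloads, size) {
  const chunks = [];
  for (let i = 0; i < payloads.length; i += size) {
    chunks.push(payloads.slice(i, i + size));
  }
  return chunks;
}

// e.g. 30 payloads in batches of 10 -> 3 invocations
const payloads = Array.from({ length: 30 }, (_, i) => ({ url: `https://example.com/${i}` }));
console.log(chunk(payloads, 10).length); // 3
```

Each chunk can then be POSTed to API Gateway as one JSON array, keeping per-invocation runtime well inside the timeout.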