In this next part, we will dive deep into some of the more advanced concepts.

Puppeteer is a Node.js library which provides a high-level API to control Chrome or Chromium. The browser is downloaded to the HOME/.cache/puppeteer folder by default.

However, if you have to download multiple large files, things start to get complicated. You see, Node.js at its core is a single-threaded system: it can only execute one process at a time. Therefore, if we have to download 10 files, each 1 gigabyte in size and each requiring about 3 minutes to download, then with a single process we would have to wait 10 x 3 = 30 minutes for the task to finish. Learn more about the single-threaded architecture of Node here.

Our CPU cores, however, can run multiple processes at the same time, and child processes are how Node.js handles parallel programming: we can fork multiple child processes in Node. This means we can combine the child_process module with our Puppeteer script and download files in parallel. (Crawling tools such as PuppeteerCrawler work on the same principle: since PuppeteerCrawler uses headless Chrome to download web pages and extract data, new pages are only opened when there is enough free CPU and memory.) If you are not familiar with how child processes work in Node, I highly encourage you to give this article a read.

A simple way to run parallel downloads with Puppeteer is to fork one child process per URL: the parent sends each child a URL, and each child launches its own browser instance and saves its file to a shared download path.