Web Site Scraper for LLMs with Airtop
Recursive Web Scraping
Use Case
Automating web scraping with recursive depth is ideal for collecting content across multiple linked pages—perfect for content aggregation, lead generation, or research projects.
What This Automation Does
This automation reads a list of URLs from a Google Sheet, scrapes each page, stores the content in a document, and adds newly discovered links back to the sheet. It continues this process for a specified number of iterations based on the defined scraping depth.
Input Parameters:
- Seed URL: The starting URL for the scraping process. Example: https://example.com/
- Links must contain: Only links containing this string are followed. Example: https://example.com/
- Depth: The number of iterations (layers of links) to scrape beyond the initial set. Example: 3
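To make the inputs concrete, here is a minimal sketch of the three parameters as a typed configuration object. The interface and field names are illustrative assumptions, not the workflow's actual parameter names.

```typescript
// Illustrative sketch only: these names are assumptions, not the
// workflow's real parameter names.
interface ScrapeConfig {
  seedUrl: string;          // Seed URL: where scraping starts
  linksMustContain: string; // only follow links containing this string
  depth: number;            // layers of links beyond the initial set
}

const config: ScrapeConfig = {
  seedUrl: "https://example.com/",
  linksMustContain: "https://example.com/",
  depth: 3,
};

console.log(config);
```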
How It Works
1. Starts by reading the Seed URL from the Google Sheet.
2. Scrapes each page and saves its content to the specified document.
3. Extracts new links from each page that contain the Links must contain string and appends them to the Google Sheet.
4. Repeats steps 2–3 Depth - 1 more times (the loop is sketched in code below).
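The control flow of this loop is easy to express in code. In the sketch below, a plain fetch call and a regex stand in for Airtop's page loading, and in-memory arrays stand in for the Google Sheet (URL list) and the Google Doc (content store); it models the steps above, not the workflow's actual implementation.

```typescript
// Sketch of the recursive scraping loop. Assumptions: global fetch
// (Node 18+) replaces Airtop's page loading, and in-memory arrays
// replace the Google Sheet and the Google Doc.
async function recursiveScrape(
  seedUrl: string,
  linksMustContain: string,
  depth: number,
): Promise<void> {
  const sheet: string[] = [seedUrl]; // stands in for the Google Sheet
  const doc: string[] = [];          // stands in for the Google Doc
  const seen = new Set<string>(sheet);

  let frontier = [...sheet]; // step 1: start from the Seed URL
  for (let i = 0; i < depth; i++) { // first pass + (Depth - 1) repeats
    const discovered: string[] = [];
    for (const url of frontier) {
      const html = await (await fetch(url)).text(); // Airtop does this in the real workflow
      doc.push(html); // step 2: save the page content to the document

      // Step 3: extract links that contain the "Links must contain" string.
      for (const m of html.matchAll(/href="([^"]+)"/g)) {
        const link = new URL(m[1], url).href; // resolve relative links
        if (link.includes(linksMustContain) && !seen.has(link)) {
          seen.add(link);
          discovered.push(link); // queue unseen matching links
        }
      }
    }
    sheet.push(...discovered); // append newly found links to the sheet
    frontier = discovered;     // step 4: repeat steps 2–3 on the new links
  }
  console.log(`Scraped ${doc.length} pages; sheet holds ${sheet.length} URLs.`);
}

recursiveScrape("https://example.com/", "https://example.com/", 3).catch(console.error);
```

Note that each iteration only scrapes the links discovered in the previous pass, so pages are visited once even when many pages link to them.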
Setup Requirements
- Airtop API key (free to generate).
- Google Docs credentials (requires creating a project in the Google Cloud Console).
- Google Sheets credentials.
Next Steps
- Add Filtering Rules: Filter which links to follow based on domain, path, or content type (a sketch of such a rule follows this list).
- Combine with Scheduler: Run this automation on a schedule to continuously explore newly discovered pages.
- Export Structured Data: Extend the process to store extracted data in a CSV or database for analysis.
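As one way to approach the filtering idea, the predicate below is a hedged sketch: the domain, path, and file-type rules are placeholder assumptions you would adapt to your own target site.

```typescript
// Illustrative link filter: the domain, path, and extension rules are
// placeholder assumptions, not part of the original workflow.
function shouldFollow(link: string): boolean {
  const url = new URL(link);
  const onDomain = url.hostname.endsWith("example.com");        // restrict by domain
  const onPath = url.pathname.startsWith("/blog/");             // restrict by path
  const isPage = !/\.(pdf|jpe?g|png|zip)$/i.test(url.pathname); // skip non-HTML content
  return onDomain && onPath && isPage;
}

// Usage: filter discovered links before appending them to the sheet.
const links = [
  "https://example.com/blog/post-1",
  "https://example.com/careers",
  "https://example.com/blog/report.pdf",
];
console.log(links.filter(shouldFollow)); // ["https://example.com/blog/post-1"]
```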
Read more about website scraping for LLMs.