COURSE AUTHOR – Rahul Mula
1. Define the Steps Involved in Web Scraping and Creating Web Crawlers
2. Install and Set Up Scrapy in Windows, macOS, Ubuntu (Linux) & Anaconda Environments
3. Send Requests to a URL to Scrape Websites Using Scrapy Spiders
4. Get the HTML Response From a URL and Parse It for Web Scraping
5. Select Desired Data From Websites Using Scrapy Selector, CSS Selectors & XPath
6. Run Scrapy Crawl Spiders to Get Data From Websites and Extract It to JSON, CSV, XLSX (Excel) and XML Files
7. Use Scrapy Shell Commands to Test & Verify CSS Selectors or XPath
8. Export and Save Scraped Data to Online Databases Like MongoDB Using Scrapy Item Pipelines
9. Define Scrapy Items to Organize Scraped Data and Load Items Using Scrapy ItemLoaders With Input & Output Processors
10. Scrape Data From Multiple Web Pages Using Scrapy Pagination and Extract Data From HTML Tables
11. Log In to Websites Using Scrapy FormRequest With CSRF Tokens
12. Scrape Dynamic/JavaScript-Rendered Websites Using Scrapy-Playwright and Interact With Web Elements, Take Screenshots of Websites or Save Them as PDFs
13. Identify API Calls From a Website and Scrape Data From API Using Scrapy Request