Web Scraping and Crawling with Python


"De training is goed gegeven. Wat een bijzonder goed punt was, was dat de inhoud gaandeweg werd bijgesteld op grond van het niveau en de behoeften van de groep.
Het was ook duidelijk dat Jeroen boven de stof staat, waardoor hij de problemen en kronkels in onze gedachten snel kon oplossen. Hij heeft het helder voor ogen." - 2020-12-15 11:44
"De training is goed gegeven. Wat een bijzonder goed punt was, was dat de inhoud gaandeweg werd bijgesteld op grond van het niveau en de beho… read full review - 2020-12-15 11:44
Starting dates and places
Data Science Workshops B.V. offers its courses by default in the following regions: 's-Hertogenbosch, Alkmaar, Almere / Lelystad, Alphen aan den Rijn, Amersfoort, Amsterdam, Antwerpen, Apeldoorn, Arnhem, Assen, Breda, Brugge, Brussel, Delft, Den Haag, Deventer, Dordrecht, Drachten, Ede, Eindhoven, Emmen, Enschede, Gent, Gouda, Groningen, Haarlem, Haarlemmermeer, Heerenveen, Hilversum, Leeuwarden, Leiden, Luik, Maastricht, Middelburg, Nijmegen, Roermond, Rotterdam, Terneuzen, Tilburg, Utrecht, Veenendaal, Venlo, Westland, Zaanstad, Zoetermeer, Zwolle
Description
Introduction
The internet is not just a collection of webpages, it's a gigantic resource of interesting data. Being able to extract that data is a valuable skill. It's certainly challenging, but with the right knowledge and tools, you'll be able to leverage a wealth of information for your personal and professional projects.
Imagine building a web scraper that legally gathers information about potential houses to buy, a process that automatically fills in that tedious form to download a report, or a crawler that enriches an existing data set with weather information. In this hands-on workshop we'll teach you how to accomplish just that using Python and a handful of packages.
You'll learn about the concepts underlying HTML, CSS selectors, and HTTP requests, and how to inspect them using your browser's developer tools. We'll show you how to turn messy HTML into structured data sets, how to automate interacting with dynamic websites and forms, and how to set up crawlers that can traverse thousands or even millions of websites. Through plenty of exercises you'll be able to apply this new knowledge to your own projects in no time.
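To give a flavour of what that looks like in practice, here's a minimal sketch using beautifulsoup4 that turns a fragment of messy HTML into a structured list. The HTML fragment and the class names in it are invented for illustration, not taken from the workshop materials:

```python
from bs4 import BeautifulSoup

# A small HTML fragment standing in for a downloaded page
html = """
<html><body>
  <h1>Listings</h1>
  <div class="house"><span class="price">€ 350.000</span></div>
  <div class="house"><span class="price">€ 425.000</span></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# CSS selector: every <span class="price"> inside a <div class="house">
prices = [span.get_text() for span in soup.select("div.house span.price")]
print(prices)  # ['€ 350.000', '€ 425.000']
```

In the workshop you'd fetch the HTML from a live website instead of a string, but the parsing step is the same.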
What you'll learn
- The challenge of scraping messy HTML
- The structure of GET and POST requests
- How to target HTML elements and attributes using CSS selectors
- The difference between a static and a dynamic website
- How to extract data from a dynamic website
- How to automate browser tasks such as clicking links and submitting forms
- How to use Python packages beautifulsoup4, pyquery, scrapy, and selenium
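The structure of a GET request, for instance, can be explored with nothing but the standard library. The URL and parameters below are placeholders, not a real endpoint:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build a query string the way a browser does when you submit a search form
params = {"city": "Utrecht", "min_price": 300000}
url = "https://example.com/houses?" + urlencode(params)
print(url)  # https://example.com/houses?city=Utrecht&min_price=300000

# Parsing it back shows how the server sees the query parameters
parsed = urlparse(url)
assert parse_qs(parsed.query)["city"] == ["Utrecht"]
```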
This workshop is for you because
- You want to extract data from a static or dynamic webpage (and potentially many websites)
- You want to transform messy HTML into a structured data set for your data visualisation or machine learning project
- You want to automate a task that requires logging in, filling in forms, or downloading files
Schedule
- Introduction to web scraping
- What's the challenge anyway?
- Common HTML elements and attributes
- Static vs dynamic web pages
- Working with Developer Tools in Firefox and Chrome
- Targeting elements using CSS Selectors
- Based on types, classes, and IDs
- Based on parents, ancestors, and siblings
- Based on attributes and pseudo-classes
- HTTP basics
- The structure of a GET request
- Query parameters
- Understanding status codes such as 200, 301, and 404
- Why use a POST request?
- From HTML to data
- Converting data types
- Extracting and combining multiple elements
- Transforming tables into CSV
- Traversing paginated results
- Working with badly formatted HTML
- Automated browsing
- Clicking links
- Filling in forms
- Logging in
- Uploading and downloading files
- Dynamic websites
- Introduction to Selenium
- Understanding headless browsing
- Scraping JavaScript
- Web crawling
- Setting up a crawler
- Traversing a single domain
- Crawling across the internet
- Scheduling crawl jobs
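The crawling topics above boil down to a breadth-first traversal over links. Here's a toy sketch of that idea, with an in-memory link graph standing in for pages that a real crawler (such as one built with scrapy) would download and parse; the URLs are made up:

```python
from collections import deque
from urllib.parse import urlparse

# A toy link graph: in a real crawler, each entry would come from
# downloading the page and extracting its <a href="..."> elements.
PAGES = {
    "https://example.com/": ["https://example.com/a", "https://other.org/"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": [],
    "https://other.org/": ["https://other.org/x"],
}

def crawl(start, same_domain_only=True):
    """Breadth-first crawl; optionally stay on the start URL's domain."""
    domain = urlparse(start).netloc
    seen, queue = set(), deque([start])
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        if same_domain_only and urlparse(url).netloc != domain:
            continue
        seen.add(url)
        queue.extend(PAGES.get(url, []))
    return seen

print(sorted(crawl("https://example.com/")))
```

Flipping `same_domain_only` to `False` is the difference between traversing a single domain and crawling across the internet; scheduling and politeness (delays, robots.txt) are what frameworks like scrapy add on top.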
Prerequisites
You're expected to have some experience with programming in Python. Our workshop Introduction to Programming in Python is one option that can help you with that. Roughly speaking, if you're familiar with the following Python syntax and concepts, then you'll be fine:
- assignment, arithmetic, boolean expression, tuple unpacking
- bool, int, float, list, tuple, dict, str, type casting
- in operator, indexing, slicing
- if, elif, else, for, while
- range(), len(), zip()
- def, (keyword) arguments, default values
- import, import as, from ... import
- lambda functions, list comprehension
- JupyterLab or Jupyter Notebook
Some experience with HTML and CSS is useful, but not required.
Recommended preparation
We're going to use Python together with JupyterLab and the following packages:
- beautifulsoup4
- mechanize
- pyquery
- scrapy, and
- selenium
The recommended way to get everything set up is to:
- Download and install the Anaconda Distribution
- Run the command !conda install -y -c conda-forge beautifulsoup4 mechanize pyquery scrapy selenium in a Jupyter notebook
Alternatively, if you don't want to use Anaconda, then you can install everything using pip. In any case, if running import bs4, mechanize, pyquery, scrapy, selenium doesn't produce any errors then you know you've set up everything correctly.
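As an alternative to running the imports by hand, a small helper can report exactly which packages are still missing. The helper below is our own sketch, not part of the workshop materials:

```python
import importlib.util

def missing_packages(names):
    """Return the packages from `names` that are not importable."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# The workshop's packages, by import name (not PyPI name)
missing = missing_packages(["bs4", "mechanize", "pyquery", "scrapy", "selenium"])
if missing:
    print("Still missing:", ", ".join(missing))
else:
    print("All set!")
```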
In addition, you should have a recent version of either Firefox or Chrome because we're going to use their Developer Tools to inspect HTTP requests and HTML elements.
Clients
We've previously delivered this workshop at:
- Elsevier
- KPN
- ProRail
- Rabobank
Testimonials
"Jeroen came to our company to help us understand big data and the tools around it. He's clearly an expert in this field and we enjoyed the course, learnt a lot and one day, when we have more big data, we hope to team up again. I couldn't recommend them more highly."
--Jamie Dobson, CEO, Container Solutions
"De training is goed gegeven. Wat een bijzonder goed punt was, was dat de inhoud gaandeweg werd bijgesteld op grond van het niveau en de behoeften van de groep.
Het was ook duidelijk dat Jeroen boven de stof staat, waardoor hij de problemen en kronkels in onze gedachten snel kon oplossen. Hij heeft het helder voor ogen." - 2020-12-15 11:44
"De training is goed gegeven. Wat een bijzonder goed punt was, was dat de inhoud gaandeweg werd bijgesteld op grond van het niveau en de beho… read full review - 2020-12-15 11:44
"Jeroen wist het goed boeiend te houden voor een groep waarin de ervaringsniveaus met Python nogal uiteen liepen, wat me een lastige opgave lijkt. Uiteindelijk heeft iedereen een significante stap kunnen maken in zijn/haar skills met web scraping in Python. De stof van de cursus was door Jeroen op maat aangepast naar de behoeften die wij hadden. De voorbeelden die hij gebruikte paste goed bij de use cases die wij hadden." - 2020-11-09 10:08
"Jeroen wist het goed boeiend te houden voor een groep waarin de ervaringsniveaus met Python nogal uiteen liepen, wat me een lastige opgave l… read full review - 2020-11-09 10:08