You probably know how to use basic functions in Excel. It’s easy to do things like sorting, applying filters, making charts, and outlining data with Excel. You can even perform advanced data analysis using pivot tables and regression models. It all becomes an easy job once the live data is in a structured format. The problem is: how can we extract data at scale and get it into Excel? Doing it manually, by repetitively typing, searching, copying and pasting, is tedious. Instead, you can automate scraping data from websites into Excel.
In this article, I will introduce several ways to scrape web data into Excel that will save you time and energy.
So what is web scraping, exactly? Web scraping is an automated method of obtaining large amounts of data from websites. Most of this data is unstructured, in an HTML format, and gets converted into structured data in a spreadsheet or a database so that it can be used in various applications.
There are many ways to scrape websites using programming languages like PHP, Python, Perl, or Ruby. Here, though, we’ll just talk about how to scrape data from websites into Excel for non-coders.
Getting web data using Excel Web Queries
Apart from transforming data from a web page manually by copying and pasting, Excel Web Queries can be used to quickly retrieve data from a standard web page into an Excel worksheet. They can automatically detect tables embedded in a page's HTML. Excel Web Queries can also be used in situations where a standard ODBC (Open Database Connectivity) connection is hard to create or maintain. You can directly scrape a table from any website using Excel Web Queries.
The process boils down to several simple steps (Check out this article):
1. Go to Data > Get External Data > From Web
2. A browser window named “New Web Query” will appear
3. In the address bar, write the web address
4. The page will load and will show yellow icons against data/tables.
5. Select the appropriate one
6. Press the Import button.
Now you have the web data scraped into your Excel worksheet, neatly arranged in rows and columns.
Getting web data using Excel VBA
Most of us use formulas in Excel a lot (e.g. =AVERAGE(..), =SUM(..), =IF(..), etc.), but are less familiar with the built-in language: Visual Basic for Applications, a.k.a. VBA. It’s commonly known as “Macros”, and such Excel files are saved with a .xlsm extension. Before using it, you need to first enable the Developer tab in the ribbon (right-click the ribbon -> Customize the Ribbon -> check the Developer tab). Then set up your layout. In this developer interface, you can write VBA code attached to various events. Click HERE (https://msdn.microsoft.com/en-us/library/office/ee814737(v=office.14).aspx) to get started with VBA in Excel 2010.
Using Excel VBA is a bit more technical and not very friendly for the non-programmers among us. VBA works by running macros: step-by-step procedures written in Excel Visual Basic. To scrape data from websites into Excel using VBA, we need to build or find a VBA script that sends requests to web pages and processes the data those pages return. It’s common to pair VBA with XMLHTTP and regular expressions to fetch and parse the pages. On Windows, you can also use VBA with WinHTTP or the InternetExplorer object to scrape data from websites into Excel.
With some patience and practice, you’ll find it worthwhile to learn some Excel VBA and some HTML; it makes web scraping into Excel much easier and more efficient for automating repetitive work. There is plenty of material, and plenty of forums, where you can learn how to write VBA code.
Automated Web Scraping Tools
If you’re looking for a quick tool to scrape data off pages into Excel but don’t want to set up VBA code yourself, I strongly recommend automated web scraping tools like Octoparse, which can scrape data into your Excel worksheet directly or via an API, with no programming required. You can pick a web scraping freeware from the list, start extracting data from websites immediately, and export the scraped data into Excel. Each web scraping tool has its pros and cons, so choose the one that best fits your needs. The video below shows how to leverage an automated web scraping tool to extract web data to Excel efficiently.
Check out this post and try out these TOP 30 free web scraping tools
Outsource Your Web Scraping Project
If time is your most valuable asset and you want to focus on your core business, outsourcing this complicated web scraping work to a proficient web scraping team with experience and expertise would be the best option. Scraping data from websites is difficult because anti-scraping measures will restrain the practice. A proficient web scraping team can get data from websites the proper way and deliver structured data to you in an Excel sheet, or in any format you need.
Running hobby projects is the best way to practice data science before getting your first job. And one of the best ways to get real data for a hobby project is: web scraping.
I’ve been promising this for a long time to my course participants – so here it is: my web scraping tutorial series for aspiring data scientists!
I’ll show you step by step how you can:
- scrape a public html webpage
- extract the data from it
- write a script that automatically scrapes thousands of public html webpages on a website
- create useful (and fun) analyses from the data you get
- analyze a huge amount of text
- analyze website metadata
This is episode #1, where we will focus on step #1 (scraping a public html webpage). And in the upcoming episodes we will continue with step #2, #3, #4, #5 and #6 (scaling this up to thousands of webpages, extracting the data from them and analyzing the data we get).
What’s more, I’ll show you the whole process in two different languages, so you will see the full scope. In this article, I will start with the simpler one: bash. And in future articles, I’ll show you how to do similar things in Python, as well.
So buckle up! Web scraping tutorial episode #1 — here we go!
Before we start…
This is a hands-on tutorial. I highly recommend doing the coding part with me (and doing the exercises at the end of the articles).
I presume that you have some bash coding knowledge already — and that you have your own data server set up already. If not, please go through these tutorials first:
The project: scraping TED.com and analyzing talks
When you run a data science hobby project, you should always pick a topic that you are passionate about.
As for me: I love public speaking.
I like to practice it myself and I like to listen to others… So watching TED presentations is also a hobby for me. Thus I’ll go ahead and analyze TED presentations in this tutorial.
Luckily, almost all (if not all) TED presentations are already available online.
…
What’s more, their transcripts are available, too!
…
(Thank you TED!)
So we’ll “just” have to write a bash script that collects all those transcripts for us and we can start our in-depth text analysis.
Note 1: I picked scraping TED.com just for the sake of example. If you are passionate about something else, after finishing these tutorial articles, try to find a web scraping project that resonates with you! Are you into finance? Try to scrape stock market news! Are you into real estate? Then scrape real estate websites! Are you a movie person? Your target could be imdb.com (or something similar)!
Log in to your remote server!
Okay, so one more time: for this tutorial, you’ll need a remote server. If you haven’t set one up yet, now is the time! 🙂
Note: If — for some reason — you don’t like my server setup, you can use a different environment. But I strongly advise against it. Using the exact same tools that I use will guarantee that everything you read here will work on your end, too. So one last time: use this remote server setup for this tutorial.
Okay, let’s say that you have your server. Great!
Now open Terminal (or PuTTY) and log in with your username and IP address.
If everything’s right, you should see the command line… Something like this:
Introducing your new favorite command line tool: curl
Interestingly enough, in this whole web scraping tutorial, you will have to learn only one new bash command. And that’s curl.
curl is a great tool to access a website’s whole html code from the command line. (It’s good for many other server-to-server data transfer processes, too, but we won’t go there for now.)
Let’s try it out right away!
curl https://www.ted.com/
The result is:
Oh boy… What’s this mess??
It’s the full html code of TED.com — and soon enough, I’ll explain how to turn this into something more meaningful. But before that, something important.
As you can see, to get the data you want, you’ll have to use that exact URL where the website content is located — and the full version of it. So, for instance this short form won’t work:
curl ted.com
It just doesn’t return anything:
And you’ll get similar empty results for these:
curl www.ted.com
curl http://www.ted.com
Even when you define the https protocol properly but miss the www part, you’ll get a short error message saying that your website content “has been moved”:
So make sure that you type the full URL and use the one where the website is actually located. This, of course, differs from website to website. Some use the www prefix, some don’t. Some still operate under the http protocol; most (luckily) use https.
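Side note: if you ever want curl to follow those “moved” redirects automatically instead of fixing the URL by hand, it has a flag for exactly that: -L. For example:
curl -L http://ted.com
With -L, curl chases the redirect chain until it reaches the final address of the page.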
A good trick to find the URL you need is to open the website in your browser — and then simply copy-paste the full URL from there into your Terminal window:
So in TED’s case, it will be this:
curl https://www.ted.com
But, as I said, I don’t want to scrape the TED.com home page.
I want to scrape the transcripts of the talks. So let’s see how to do that!
curl in action: downloading one TED talk’s transcript
Obviously, the first step in a web scraping project is always to find the right URL for the webpage that you want to download, extract and analyze.
For now, let’s start with one single TED talk. (Then in the next episode of this tutorial, we’ll scale this up to all 3,300 talks.)
For prototyping my script, I chose the most viewed speech, which is Sir Ken Robinson’s “Do schools kill creativity?” (Excellent talk, by the way; I highly recommend watching it!)
The transcript itself is found under its own URL, so you’ll have to copy-paste that address to the command line, right after the curl command:
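A sketch of the command (double-check the exact transcript URL in your browser first; the path below is an assumption about where the transcript lives on TED.com):
curl https://www.ted.com/talks/sir_ken_robinson_do_schools_kill_creativity/transcript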
Great!
We got our messy html code again — but this actually means that we are one step closer.
If you scroll up a bit in your Terminal window, you’ll recognize parts of the speech there:
This is not (yet) the most readable format, though. So let’s clean it up: first, turn the html code into plain text by piping it through the html2text command, then find the lines that mark the beginning and the end of the actual transcript, and cut everything outside them with sed.
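To remove all the lines before a given line in a file (this one keeps the line with the pattern itself), a common sed recipe is this, where [the pattern itself] is a placeholder:
sed -n '/[the pattern itself]/,$p'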
And to remove all the lines after a given line in a file, this is the code:
sed -n '/[the pattern itself]/q;p'
Note: this one will remove the line with the pattern, too!
Side note:
Now, of course, if you don’t know sed inside out, you couldn’t have figured out these code snippets by yourself. But here’s the thing: you don’t have to, either!
If you build up your data science knowledge by practicing, it’s okay to use Google and Stackoverflow and find answers to your questions online. Well, it’s not just okay, you have to do so!
E.g. if you don’t know how to remove lines after a given line in bash, type this into Google:
The first result brings you to Stackoverflow — where right in the first answer there are three(!) alternative solutions for the problem:
Who says learning data science by self-teaching is hard nowadays?
Okay, pep-talk is over, let’s get back to our web scraping tutorial!
Let’s replace the [the pattern itself] parts in your sed commands with the patterns we’ve found above, and then chain everything to your command using pipes.
Something like this:
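Assembled with pipes, it looks something like this. (A sketch built on assumptions: the URL is the illustrative one from above, and the two patterns, Details About the talk and Programs &. initiatives, are the boundary lines referenced later in this post.)
curl https://www.ted.com/talks/sir_ken_robinson_do_schools_kill_creativity/transcript |
html2text |
sed -n '/Details About the talk/,$p' |
sed -n '/Programs &. initiatives/q;p'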
Note 1: I used line breaks in my command… but only to make my code nicer. Using line breaks is optional in this case.
Note 2: In the Programs &. initiatives line, I didn’t add the * characters to the pattern in sed, because the line (and the pattern) is fairly unique without them already. If you want to add them, you can. But you’ll have to know that * is a special character in sed, so to refer to it as a literal character in your pattern, you’ll have to “escape” each one with a backslash first. The code would look like this: sed -n '/\*\*\*\* Programs &. initiatives \*\*\*\*/q;p'
Again, this won’t be needed anyway.
Let’s run the command and check the results!
If you scroll up, you’ll see these at the beginning of the returned text (without the annotations, of course):
Before you start to worry about all the chaos in the first few lines…
- The part that I annotated with the yellow frame: that’s your code. (And of course, it’s not a part of the returned results.)
- The part with the blue frame: that’s only a “status bar” that shows how fast the web scraping process was — it’s on your screen but it won’t be part of the downloaded webpage content, either. (You’ll see this clearly soon, when we save our results into a file!)
However, the one with the red frame (Details about the talk) is part of the downloaded webpage content… and you won’t need it. It’s just left there, so we will remove it soon.
But first, scroll back down!
At the end of the file the situation is cleaner:
We only have one unnecessary line left there that says TED.
So you are almost there…
Removing first and last lines with the head and tail commands
And as a final step, remove the first (Details About the talk) and the last (TED) lines of the content you currently see in Terminal! These two lines are not part of the original talk… and you don’t want them in your analyses.
For this little modification, let’s use the head and tail commands (I wrote about them here).
To remove the last line, add this code: head -n-1
And to remove the first line, add this code: tail -n+2
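If you want a quick feel for what these two do, try them on a toy input (this assumes GNU head and tail, which is what you’ll find on the kind of Ubuntu data server used in this tutorial):
seq 5 | head -n-1   # prints 1 2 3 4: everything except the last line
seq 5 | tail -n+2   # prints 2 3 4 5: everything except the first line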
And with that, this will be your final code:
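Put together (with the same assumptions as before about the URL and the two patterns), it’s something like this:
curl https://www.ted.com/talks/sir_ken_robinson_do_schools_kill_creativity/transcript |
html2text |
sed -n '/Details About the talk/,$p' |
sed -n '/Programs &. initiatives/q;p' |
head -n-1 |
tail -n+2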
You can try it out…
But I recommend saving the output into a file first, so you will be able to reuse the data in the future.
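Saving is just a matter of output redirection: append > proto_text.csv to the end of the final command above, like so:
curl https://www.ted.com/talks/sir_ken_robinson_do_schools_kill_creativity/transcript |
html2text |
sed -n '/Details About the talk/,$p' |
sed -n '/Programs &. initiatives/q;p' |
head -n-1 |
tail -n+2 > proto_text.csv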
If you print this freshly created proto_text.csv file to your screen, you’ll see that you have beautifully downloaded, cleaned and stored the transcript of Sir Ken Robinson’s TED talk:
cat proto_text.csv
And with that you’ve finished the first episode of this web scraping tutorial!
Nice!
Exercise – your own web scraping mini-project
Now that you have seen how a simple web scraping task is done, I encourage you to try this out yourself.
Pick a simple public .html webpage from the internet — anything that interests you — and do the same steps that we have done above:
- Download the .html site with curl!
- Extract the text with html2text!
- Clean the data with sed, head, tail, grep or anything else you need!
The third step could be especially challenging. There are many, many types of data cleaning issues… But hey, after all, this is what a data science hobby project is for: solving problems and challenges! So go for it, pick a webpage and scrape it! 😉 And if you get stuck, don’t be afraid to go to Google or Stackoverflow for help!
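If you want a skeleton to start from, here’s the generic shape of the whole flow; every piece of it (the URL, the two patterns and the file name) is a placeholder for your own project:
curl https://example.com/your-page |
html2text |
sed -n '/PATTERN AT THE START/,$p' |
sed -n '/PATTERN AT THE END/q;p' |
head -n-1 | tail -n+2 > my_data.txt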
Note 1: Some big (or often-scraped) webpages block web scraping scripts. If so, you’ll get a “403 Forbidden” message returned to your curl command. Please consider it a “polite” request from those websites, and try not to find a way around it to scrape them anyway. They don’t want it, so just go ahead and find another project.
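A quick way to check what status code a page returns, without printing its whole html, is curl’s write-out option (example.com is just a stand-in URL here):
curl -s -o /dev/null -w "%{http_code}\n" https://example.com
If this prints 403, take the hint and pick another site.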
Note 2: Also consider the legal aspect of web scraping. Generally speaking, if you use your script strictly for a hobby project, this probably won’t be an issue at all. (This is not official legal advice though.) But if it becomes more serious, just in case, to stay on the safe side, consult a lawyer, too!
Web Scraping Tutorial – summary and the next steps
Web Scraping Online
Scraping one webpage (or TED talk) is nice…
But boring! 😉
So in the next episode of this web scraping tutorial series, I’ll show you how to scale this up! You will write a bash script that – instead of one single talk – will scrape all 3,000+ talks on TED.com. Let’s continue here: web scraping tutorial, episode #2.
And in the later episodes, we will focus on analyzing the huge amount of text we collected. It’s going to be fun! So stay with me!
- If you want to learn more about how to become a data scientist, take my 50-minute video course: How to Become a Data Scientist. (It’s free!)
- Also check out my 6-week online course: The Junior Data Scientist’s First Month video course.
Cheers,
Tomi Mester