
Wednesday, 3 August 2016

Three Common Methods For Web Data Extraction


Probably the most common technique traditionally used to extract data from web pages is to cook up some regular expressions that match the pieces you want (e.g., URLs and link titles). Our screen-scraper software actually started out as an application written in Perl for this very reason. In addition to regular expressions, you might also use some code written in something like Java or Active Server Pages to parse out larger chunks of text. Using raw regular expressions to pull out the data can be a little intimidating to the uninitiated, and can get a bit messy when a script contains a lot of them. At the same time, if you're already familiar with regular expressions and your scraping project is relatively small, they can be a great solution.
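For instance, here is a minimal sketch of that approach in Python (the article's own tooling was Perl, Java, or ASP); the target URL and the exact pattern are placeholders you would adapt to the page at hand:

```python
import re
import urllib.request

# A minimal sketch of the regex approach: fetch a page and pull out
# link URLs and titles. The URL below is just a placeholder.
html = urllib.request.urlopen("https://example.com/").read().decode("utf-8", errors="ignore")

# Match <a href="...">...</a> pairs; real-world markup usually needs a more
# forgiving pattern (extra attributes, single quotes, line breaks, etc.).
link_pattern = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.IGNORECASE | re.DOTALL)

for url, title in link_pattern.findall(html):
    print(url, "->", title.strip())
```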

Other techniques for getting the data out can get very sophisticated as algorithms that make use of artificial intelligence and such are applied to the page. Some programs will actually analyze the semantic content of an HTML page, then intelligently pull out the pieces that are of interest. Still other approaches deal with developing "ontologies", or hierarchical vocabularies intended to represent the content domain.

There are a number of companies (including our own) that offer commercial applications specifically intended to do screen-scraping. The applications vary quite a bit, but for medium to large-sized projects they're often a good solution. Each one will have its own learning curve, so you should plan on taking time to learn the ins and outs of a new application. Especially if you plan on doing a fair amount of screen-scraping it's probably a good idea to at least shop around for a screen-scraping application, as it will likely save you time and money in the long run.

So what's the best approach to data extraction? It really depends on what your needs are, and what resources you have at your disposal. Here are some of the pros and cons of the various approaches, as well as suggestions on when you might use each one:

Raw regular expressions and code

Advantages:

- If you're already familiar with regular expressions and at least one programming language, this can be a quick solution.

- Regular expressions allow for a fair amount of "fuzziness" in the matching such that minor changes to the content won't break them.

- You likely don't need to learn any new languages or tools (again, assuming you're already familiar with regular expressions and a programming language).

- Regular expressions are supported in almost all modern programming languages. Heck, even VBScript has a regular expression engine. It's also nice because the various regular expression implementations don't vary too significantly in their syntax.

Disadvantages:

- They can be complex for those that don't have a lot of experience with them. Learning regular expressions isn't like going from Perl to Java. It's more like going from Perl to XSLT, where you have to wrap your mind around a completely different way of viewing the problem.

- They're often confusing to analyze. Take a look through some of the regular expressions people have created to match something as simple as an email address and you'll see what I mean.

- If the content you're trying to match changes (e.g., they change the web page by adding a new "font" tag) you'll likely need to update your regular expressions to account for the change.

- The data discovery portion of the process (traversing various web pages to get to the page containing the data you want) will still need to be handled, and can get fairly complex if you need to deal with cookies and such.

When to use this approach: You'll most likely use straight regular expressions in screen-scraping when you have a small job you want to get done quickly. Especially if you already know regular expressions, there's no sense in getting into other tools if all you need to do is pull some news headlines off of a site.
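As a rough illustration of the headline-grabbing case, and of the data discovery and cookie handling mentioned in the disadvantages, here is a hedged Python sketch using the third-party requests library to keep cookies across requests while walking to the page you want; the login URL, form fields, and CSS class are hypothetical:

```python
import re
import requests

# A rough sketch of the "data discovery" side: keeping cookies across
# requests while walking to the page that holds the data. The login URL
# and form field names here are hypothetical placeholders.
session = requests.Session()
session.post("https://example.com/login",
             data={"username": "me", "password": "secret"})

# The session now carries any cookies the site set, so subsequent
# requests look like the same logged-in visitor.
page = session.get("https://example.com/members/news").text
headlines = re.findall(r'<h2 class="headline">(.*?)</h2>', page, re.DOTALL)
print(headlines)
```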

Ontologies and artificial intelligence

Advantages:

- You create it once and it can more or less extract the data from any page within the content domain you're targeting.

- The data model is generally built in. For example, if you're extracting data about cars from web sites the extraction engine already knows what the make, model, and price are, so it can easily map them to existing data structures (e.g., insert the data into the correct locations in your database).

- There is relatively little long-term maintenance required. As web sites change you likely will need to do very little to your extraction engine in order to account for the changes.

Disadvantages:

- It's relatively complex to create and work with such an engine. The level of expertise required to even understand an extraction engine that uses artificial intelligence and ontologies is much higher than what is required to deal with regular expressions.

- These types of engines are expensive to build. There are commercial offerings that will give you the basis for doing this type of data extraction, but you still need to configure them to work with the specific content domain you're targeting.

- You still have to deal with the data discovery portion of the process, which may not fit as well with this approach (meaning you may have to create an entirely separate engine to handle data discovery). Data discovery is the process of crawling web sites such that you arrive at the pages where you want to extract data.

When to use this approach: Typically you'll only get into ontologies and artificial intelligence when you're planning on extracting information from a very large number of sources. It also makes sense to do this when the data you're trying to extract is in a very unstructured format (e.g., newspaper classified ads). In cases where the data is very structured (meaning there are clear labels identifying the various data fields), it may make more sense to go with regular expressions or a screen-scraping application.

Screen-scraping software

Advantages:

- Abstracts most of the complicated stuff away. You can do some pretty sophisticated things in most screen-scraping applications without knowing anything about regular expressions, HTTP, or cookies.

- Dramatically reduces the amount of time required to set up a site to be scraped. Once you learn a particular screen-scraping application the amount of time it requires to scrape sites vs. other methods is significantly lowered.

- Support from a commercial company. If you run into trouble while using a commercial screen-scraping application, chances are there are support forums and help lines where you can get assistance.

Disadvantages:

- The learning curve. Each screen-scraping application has its own way of going about things. This may imply learning a new scripting language in addition to familiarizing yourself with how the core application works.

- A potential cost. Most ready-to-go screen-scraping applications are commercial, so you'll likely be paying in dollars as well as time for this solution.

- A proprietary approach. Any time you use a proprietary application to solve a computing problem (and proprietary is obviously a matter of degree) you're locking yourself into using that approach. This may or may not be a big deal, but you should at least consider how well the application you're using will integrate with other software applications you currently have. For example, once the screen-scraping application has extracted the data how easy is it for you to get to that data from your own code?

When to use this approach: Screen-scraping applications vary widely in their ease-of-use, price, and suitability to tackle a broad range of scenarios. Chances are, though, that if you don't mind paying a bit, you can save yourself a significant amount of time by using one. If you're doing a quick scrape of a single page you can use just about any language with regular expressions. If you want to extract data from hundreds of web sites that are all formatted differently you're probably better off investing in a complex system that uses ontologies and/or artificial intelligence. For just about everything else, though, you may want to consider investing in an application specifically designed for screen-scraping.

As an aside, I thought I should also mention a recent project we've been involved with that has actually required a hybrid approach of two of the aforementioned methods. We're currently working on a project that deals with extracting newspaper classified ads. The data in classifieds is about as unstructured as you can get. For example, in a real estate ad the term "number of bedrooms" can be written about 25 different ways. The data extraction portion of the process is one that lends itself well to an ontologies-based approach, which is what we've done. However, we still had to handle the data discovery portion. We decided to use screen-scraper for that, and it's handling it just great. The basic process is that screen-scraper traverses the various pages of the site, pulling out raw chunks of data that constitute the classified ads. These ads then get passed to code we've written that uses ontologies in order to extract out the individual pieces we're after. Once the data has been extracted we then insert it into a database.
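To make the ontology idea a little more concrete, here is a toy Python sketch that maps a few of the many phrasings of "number of bedrooms" onto a single field; the phrasings and patterns are illustrative guesses, not the vocabulary the project actually uses:

```python
import re

# A toy illustration of the ontology idea from the classified-ads project:
# a small vocabulary mapping several ways "number of bedrooms" can be
# written to one canonical field. The phrasings listed are examples only.
BEDROOM_TERMS = [
    r"(\d+)\s*(?:bed\s*rooms?|bdrms?|brs?|beds)\b",
    r"(\d+)\s*bd\b",
]

def extract_bedrooms(ad_text):
    """Return the bedroom count if any known phrasing matches, else None."""
    for pattern in BEDROOM_TERMS:
        match = re.search(pattern, ad_text, re.IGNORECASE)
        if match:
            return int(match.group(1))
    return None

print(extract_bedrooms("Charming bungalow, 3 bdrm, 2 bath, close to park"))  # 3
print(extract_bedrooms("Studio apartment downtown"))                         # None
```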

Source: http://ezinearticles.com/?Three-Common-Methods-For-Web-Data-Extraction&id=165416

Friday, 29 July 2016

Tips for scraping business directories


Are you looking to scrape business directories to generate leads?

Here are a few tips for scraping business directories.

Web scraping is not rocket science. But there are good, bad, and worse ways of doing it.

Generating sales-qualified leads is always a headache. The old-school way is to buy a list from sites like Data.com, but such lists are quite expensive.

Scraping business directories can help generate sales qualified leads. The following tips can help you scrape data from business directories efficiently.

1) Choose a good framework for writing the web scrapers. This can save a lot of time and trouble. Python's Scrapy is our favourite, but there are other, non-Python frameworks too (a minimal spider sketch follows this list).

2) Business directories may have anti-scraping mechanisms in place. Use an IP rotation service so that the crawl comes from multiple, changing IP addresses, which helps cover your tracks.

3) Some sites really don't want to be scraped and will block the bot. In these cases, you may need to make your web scraper behave more like a human visitor. Browser automation tools like Selenium can help with this.

4) Websites update their data quite often. The scraper should be able to keep up with those changes, which is a hard task; you may need professional services for it.
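As referenced in tip 1, here is a minimal Scrapy spider sketch for a generic business directory; the start URL and CSS selectors are placeholders, and IP rotation (tip 2) would normally be layered on through downloader middleware or an external rotating-proxy service:

```python
import scrapy

# A minimal Scrapy spider sketch for a business directory. The URL and
# CSS selectors are placeholders; a real directory needs its own.
class DirectorySpider(scrapy.Spider):
    name = "directory"
    start_urls = ["https://example-directory.com/listings"]

    # Be polite by default; proxy/IP rotation is normally added through
    # downloader middleware or an external rotating-proxy service.
    custom_settings = {
        "DOWNLOAD_DELAY": 1.0,
        "AUTOTHROTTLE_ENABLED": True,
    }

    def parse(self, response):
        for card in response.css("div.listing"):
            yield {
                "name": card.css("h2::text").get(),
                "phone": card.css("span.phone::text").get(),
                "website": card.css("a.site::attr(href)").get(),
            }
        # Follow pagination if a "next" link exists.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```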

One of the easiest ways to generate leads is to scrape business directories and then enrich the results. We made Leadintel for lead research and enrichment.

Source: http://blog.datahut.co/tips-for-scraping-business-directories/

Monday, 11 July 2016

Web Scraping Best Practices

Extracting data from the World Wide Web comes with several challenges, as more webmasters work day and night to limit the scraping and crawling of their data in order to survive in a competitive world. There are various other problems you may face when web scraping, and most of them can be avoided by adopting and implementing the web scraping best practices discussed in this article.

Have knowledge of the scraping tools

By acquiring adequate knowledge of the hurdles that may be encountered during web scraping, you will have a smoother experience and stay on the safe side of the law. Conduct thorough research on the tools you will use for scraping and crawling. Firsthand knowledge of these tools will help you find the data you need without being blocked.

Proper proxy software that acts as a middleman works well when you know how to work with HTTP and HTML. Use tools that can vary crawling patterns, URLs and the data retrieved even when you are crawling a single domain. This will help you abide by the rules and regulations that come with web scraping and avoid legal issues.

Conduct your scraping activities during off-peak hours

You may opt to extract data during times when fewer people are accessing the site, for instance over weekends, during late-night hours or on public holidays. Visiting a website several times to retrieve the same data is a waste of bandwidth; it is advisable to download the site content to your computer once so that you can access it whenever the need arises.

Hide your scraping activities

There is a thin line between ethical and unethical crawling, so you should avoid showing up at the top of a website's user list. Cover your tracks as best you can by making use of proxy IPs to avoid any legal problems. You may also use multiple IP addresses or VPN services to conceal your scraping activities and lower the chances of landing on a website's blacklist.
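As a small illustration, the Python sketch below routes requests through a rotating pool of proxies using the requests library; the proxy addresses are placeholders for whatever proxy or VPN service you use:

```python
import requests

# A minimal sketch of routing requests through proxies with the requests
# library. The proxy addresses are placeholders; rotating through a pool
# (or using a VPN/rotation service) spreads the traffic across IPs.
PROXIES = [
    {"http": "http://203.0.113.10:8080", "https": "http://203.0.113.10:8080"},
    {"http": "http://203.0.113.11:8080", "https": "http://203.0.113.11:8080"},
]

urls = ["https://example.com/page1", "https://example.com/page2"]
for i, url in enumerate(urls):
    proxy = PROXIES[i % len(PROXIES)]  # simple round-robin rotation
    response = requests.get(url, proxies=proxy, timeout=30)
    print(url, response.status_code)
```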

Website owners today are very protective of their data and of any other information published under their URL. Read the terms and conditions of a website carefully, as its owners may consider crawling an infringement of their privacy. Simple etiquette goes a long way, and your web scraping efforts will be more fruitful if the site owner supports the idea of sharing data.

Keep a record of your activities

Web scraping involves large amounts of data. Because of this, you may not always remember every piece of information you have acquired, so gathering statistics will help you monitor your activities.

Load data in phases

Web scraping demands a lot of patience when using crawlers to get the information you need. Take the process slowly by loading data one piece at a time. Too many parallel requests to the same domain can crash the site or allow the scraping attempts to be traced back to your machine.

Loading data in small batches will save you the hassle of scraping afresh if your activity is interrupted, because you will have already stored part of the required data. You can reduce the load you place on an individual domain through techniques such as caching pages you have already scraped to avoid redundant downloads. Use auto-throttling mechanisms to regulate the amount of traffic you send to the website and pause between requests to prevent getting banned.
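Here is a rough Python sketch of those two ideas, a simple on-disk cache plus a pause before each fresh request; a real throttle might adapt the delay to the server's response times rather than using a fixed value:

```python
import time
import hashlib
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("page_cache")
CACHE_DIR.mkdir(exist_ok=True)

def fetch(url, delay=2.0):
    """Fetch a page with a simple on-disk cache and a pause between requests."""
    key = hashlib.sha256(url.encode()).hexdigest()
    cached = CACHE_DIR / f"{key}.html"
    if cached.exists():                       # avoid re-downloading pages
        return cached.read_text(encoding="utf-8")
    time.sleep(delay)                         # throttle: pause before each new request
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    cached.write_text(html, encoding="utf-8")
    return html
```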

Conclusion

By following these web scraping best practices you will be able to work with websites and gather the data your clients request without major hurdles along the way. The ultimate goal of every web scraper is to access vital information while remaining on the good side of the law.

Source URL: http://nocodewebscraping.com/web-scraping-best-practices/

Saturday, 9 July 2016

Web Data Scraping: Practical Uses

Whether in the form of media, text or data in diverse other formats, the internet serves as a huge storehouse of the world's information. While browsing for commercial or business needs alike, users are exposed to numerous web pages that contain data in just about every form. Even though access to such data is extremely critical for success in the contemporary world, unfortunately most of it is not open. More often than not, business websites restrict access to such data and do not allow visitors to save or display it for reuse on their local storage devices or on their own websites. This is where web data extraction tools come in handy.

Read on for a closer look into some of the common areas of data scraping usage.

• Gathering data from diverse sources for analysis: If a business requires the collection and analysis of data for certain categories from multiple websites, it helps to refer to web data integration or data scraping specialists in categories like industrial equipment, real estate, automobiles, marketing, business contacts, electronic gadgets and so forth.

• Collection of data in different formats: Different websites publish information and structured data in different formats, so it may not be possible for organizations to see all the required data in one place at any given time. Data scrapers allow information spanning multiple pages and sections to be extracted into a single database or spreadsheet, making it easy for users to analyze (or visualize) the data (a brief sketch of this step follows the list).

• Helps Research: Data is an important and integral part of all kinds of research – marketing, academic or scientific. A data scraper helps in gathering structured data with ease.

• Market analysis for businesses: Companies that offer products or services in specific domains require comprehensive data on similar products and services, which appear on the market almost daily.
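As mentioned in the second point above, here is a brief, hedged Python sketch of the consolidation step: rows scraped from several pages are written into one spreadsheet-friendly CSV file, with scrape_page standing in for whatever site-specific extraction logic is needed:

```python
import csv

# A hedged sketch of consolidating rows scraped from several pages into one
# spreadsheet-friendly file. scrape_page is a placeholder for whatever
# site-specific extraction logic a given website needs.
def scrape_page(url):
    """Return a list of {"name": ..., "price": ..., "category": ...} dicts."""
    return []  # site-specific extraction would go here

urls = ["https://example.com/products?page=%d" % n for n in range(1, 4)]
rows = [row for url in urls for row in scrape_page(url)]

with open("combined.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price", "category"])
    writer.writeheader()
    writer.writerows(rows)
```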

Web scraping software solutions from reputable companies keep a constant watch on this kind of data and allow users to access the required information from diverse sources at the click of a button.

Go for data extraction to take your business to the next level of success; you will not be disappointed.

Source URL : http://www.3idatascraping.com/web-data-scraping-practical-uses.php

Thursday, 7 July 2016

Web Scraping Services : Making Modern File Formats More Accessible

Data scraping is the process of automatically sorting through information contained on the internet inside HTML, PDF or other documents and collecting relevant information into databases and spreadsheets for later retrieval. On most websites, the text is easily accessible in the source code, but an increasing number of businesses are using Adobe PDF format (Portable Document Format: a format which can be viewed by the free Adobe Acrobat software on almost any operating system; see below for a link). The advantage of PDF format is that the document looks exactly the same no matter which computer you view it from, making it ideal for business forms, specification sheets, etc.; the disadvantage is that the text is converted into an image from which you often cannot easily copy and paste. PDF scraping is the process of data scraping information contained in PDF files. To scrape a PDF document, you must employ a more diverse set of tools.

There are two main types of PDF files: those built from a text file and those built from an image (likely scanned in). Adobe's own software is capable of PDF scraping from text-based PDF files, but special tools are needed for scraping text from image-based PDF files. The primary tool for scraping image-based PDFs is the OCR program. OCR, or Optical Character Recognition, programs scan a document for small pictures that they can separate into letters. These pictures are then compared to actual letters and, if matches are found, the letters are copied into a file. OCR programs can scrape image-based PDF files quite accurately, but they are not perfect.
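As a hedged illustration of those two cases, the Python sketch below reads text-based PDFs directly and falls back to OCR for image-based ones; the choice of libraries (pypdf, pdf2image, pytesseract) is ours for the example and is not prescribed by the article:

```python
from pypdf import PdfReader              # for text-based PDFs
import pytesseract                       # OCR engine wrapper, for image-based PDFs
from pdf2image import convert_from_path  # rasterizes PDF pages to images

def scrape_pdf(path):
    """Return the text of a PDF, using OCR only when no embedded text exists."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    if text.strip():
        return text                      # the PDF carried real text
    # No embedded text: rasterize each page and let the OCR engine read it.
    images = convert_from_path(path)
    return "\n".join(pytesseract.image_to_string(img) for img in images)
```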

Once the OCR program or Adobe program has finished PDF scraping a document, you can search through the data to find the parts you are most interested in. This information can then be stored into your favorite database or spreadsheet program. Some PDF scraping programs can sort the data into databases and/or spreadsheets automatically making your job that much easier.

Quite often you will not find a PDF scraping program that will obtain exactly the data you want without customization. Surprisingly, a search on Google only turned up one business that will create a customized PDF scraping utility for your project. A handful of off-the-shelf utilities claim to be customizable, but seem to require a bit of programming knowledge and time commitment to use effectively. Obtaining the data yourself with one of these tools may be possible but will likely prove quite tedious and time consuming. It may be advisable to contract a company that specializes in PDF scraping to do it for you quickly and professionally.

Let's explore some real world examples of the uses of PDF scraping technology. A group at Cornell University wanted to improve a database of technical documents in PDF format by taking the old PDF file where the links and references were just images of text and changing the links and references into working clickable links thus making the database easy to navigate and cross-reference. They employed a PDF scraping utility to deconstruct the PDF files and figure out where the links were. They then could create a simple script to re-create the PDF files with working links replacing the old text image.

A computer hardware vendor wanted to display specifications data for his hardware on his website. He hired a company to perform PDF scraping of the hardware documentation on the manufacturers' website and save the PDF scraped data into a database he could use to update his webpage automatically.

PDF Scraping is just collecting information that is available on the public internet. PDF Scraping does not violate copyright laws.

PDF Scraping is a great new technology that can significantly reduce your workload if it involves retrieving information from PDF files. Applications exist that can help you with smaller, easier PDF Scraping projects but companies exist that will create custom applications for larger or more intricate PDF Scraping jobs.

Source URL :  http://yellowpagesdatascraping.blogspot.in/2015/06/web-scraping-services-making-modern.html

Saturday, 18 June 2016

Increasing Accessibility by Scraping Information From PDF

You may have heard about data scraping, a method used by computer programs to extract data from the output of another program. To put it simply, it is the automatic sorting of information found in different resources, including web pages, HTML files, PDFs and other documents, and the collection of the pertinent pieces. That information is then stored in databases or spreadsheets so that users can retrieve it later.

Most websites today have text that can be easily accessed in the source code. However, many businesses now choose to use Adobe PDF (Portable Document Format) files, a file type that can be viewed with the free Adobe Acrobat software on almost any operating system. There are many advantages to PDF files; among them is that the document looks exactly the same on whichever computer you view it, which makes it ideal for business documents and specification sheets. There are disadvantages as well. One is that the text contained in the file is converted into an image, so you may often run into problems when copying and pasting.

This is why some people scrape information from PDFs. The process, often called PDF scraping, is just like data scraping except that the information you are after is contained in PDF files. To begin scraping information from PDFs, you must choose a tool specifically designed for this process. However, it is not easy to find the right tool for effective PDF scraping, because most tools today have trouble obtaining exactly the data you want without customization.

Nevertheless, if you search well enough, you will find the program you are looking for. You do not need programming knowledge to use such tools; you can simply specify your preferences and the software will do the rest of the work for you. There are also companies you can contact that will perform the task for you, since they have the right tools. Doing things manually is tedious and complicated, whereas professionals can finish the job in far less time. Scraping information from PDFs is a process of collecting information that can be found on the internet, and this does not infringe copyright laws.

Source URL: http://ezinearticles.com/?Increasing-Accessibility-by-Scraping-Information-From-PDF&id=4593863

Wednesday, 11 May 2016

Web Scraping to Create Open Data

Open data is the idea that some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control.

My first experience with open data was in the year 2010. I wanted to create a better app for Bicing, the local bike sharing system in Barcelona. Their website was a nightmare to use and I was tired of needing to walk to each station, trying to guess which ones had bicycles. There was no app for Android, other than a couple of unofficial attempts that didn’t work at all.

I began as most would; I searched the internet and found a library named python-bicing that was somehow able to retrieve station and bike information. This was my first time using Python and, after some investigation, I learned what the code was doing: accessing the official website, parsing the JavaScript that generated their buggy map and giving back a nice chunk of Python objects that represented bike share stations.

This I learned was called web scraping. It was like I had figured out a magic trick that would allow me to always be able to access the data I needed without having to rely on faulty websites.
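A generic sketch of that pattern in Python might look like the following; the URL, the JavaScript variable name, and the station fields are hypothetical stand-ins rather than the real site's or python-bicing's internals:

```python
import json
import re
import urllib.request

# A generic sketch of the pattern described above: fetch the official page,
# find the JavaScript variable that feeds the map, and turn it into Python
# objects. The URL and variable name are placeholders, not the real site's.
html = urllib.request.urlopen("https://example-bikeshare.test/map").read().decode("utf-8")

# Real pages often need extra cleanup before the blob parses as JSON.
match = re.search(r"var\s+stations\s*=\s*(\[.*?\]);", html, re.DOTALL)
if match:
    stations = json.loads(match.group(1))
    for station in stations:
        print(station.get("name"), station.get("bikes"), station.get("slots"))
```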

The rise of OpenBicing and CityBikes

Shortly after, I launched OpenBicing, an Android app for the local bike sharing system in Barcelona, together with a backend that used python-bicing. I also shared a public API that provided this information so that nobody else had to do the dirty work ever again.

Since other cities were having the same problem, we expanded the scope of the project worldwide and renamed it CityBikes. That was 6 years ago.

To date, CityBikes is the most comprehensive and widely used open API for bike sharing information, with support for over 400 cities worldwide. Our API processes around 10 requests per second and we scrape each of the 418 feeds about every three minutes. Making our core library available for anyone to contribute has been crucial in maintaining and adding coverage for all of the supported systems.

The open data fallacy

We are usually regarded as “an open data project” even though less than 10% of our feeds come from properly licensed, documented and machine-readable feeds. The remaining 90% is composed of 188 feeds that are machine-readable, but not licensed nor documented, and 230 that are entirely maintained by scraping HTML pages.

NABSA (North American BikeShare Association) recently published GBFS (General Bikeshare Feed Specification). This is clearly a step in the right direction, but I can’t help but look at the almost 60% of services we currently support through scraping and wonder how long it will take the remaining organizations to release their information, if ever. This is even more the case considering these numbers aren’t even taking into account worldwide coverage.
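For contrast, consuming a properly published GBFS feed is little more than fetching JSON; in the sketch below the base URL is a placeholder, while the file and field names follow the GBFS station_status schema:

```python
import json
import urllib.request

# A small sketch of what consuming a published GBFS feed looks like,
# versus scraping HTML. The base URL is a placeholder; the file and
# field names follow the GBFS station_status schema.
BASE = "https://example-bikeshare.test/gbfs/en"

with urllib.request.urlopen(f"{BASE}/station_status.json") as resp:
    status = json.load(resp)

for station in status["data"]["stations"]:
    print(station["station_id"],
          station.get("num_bikes_available"),
          station.get("num_docks_available"))
```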

Over the last few years there has been a progression by transportation companies and city councils toward providing their information as “open data”. Directive 2003/98/EC encourages EU member states to release information regarding public services.

Yet, in most cases, there’s little action in enforcing Public Private Partnerships (PPP) to release their public information under a non-restrictive license or even to transfer ownership of the data to city councils to be included in their open data portals.

Even with the increasing number of companies and institutions interested in participating in open data, by no means should we consider open data a reality or something to be taken for granted. I firmly believe in the future and benefits of open data; I have seen them happening all around CityBikes. But as technologists we need to stress the fact that the data is not out there yet.

The benefits of open data

When I started this project, I sought to make a difference in Barcelona. Now you can find tons of bike sharing apps that use our API on all major platforms. It doesn’t matter that these are not our own apps. They are solving the same problem we were trying to fix, so their success is our success.

Besides popular apps like Moovit or CityMapper, there are many neat projects out there, some of which are published under free software licenses. Ideally, a city council could create a customization of any of these apps for their own use.

Most official applications for bike sharing systems have terrible ratings. The core business of transportation companies is running a service, so they have no real motivation to create an engaging UI or innovate further. In some cases, the city council does not even own the rights to the data, being completely at the mercy of the company providing the transportation service.

Open data over apps

When providing public services, city councils and companies often get lost in what they should offer as an aid to the service. They focus on a nice map or a flashy application, rather than providing the data behind these service aids. Maps, apps, and websites have a limited focus and usually serve a single purpose. On the other hand, data is malleable and the purest form of representation. While you can’t create something new from looking and playing with a static map (except, of course, if you scrape it), data can be used to create countless different iterations. It can even provide a bridge that will allow anyone to participate, improve and build on top of these public services.

Wrap Up

At this point, you might wonder why I care so much about bike sharing. To me it’s not about bike sharing anymore. CityBikes is just too good of an open data metaphor, a simulation in which public information is freely accessible to everyone. It shows the benefits of open data and the deficiencies that arise from the lack thereof.

We shouldn’t have to create open data by scraping websites. This information should be already available, easily accessed and provided in a machine-readable format from the original providers, be they city councils or transportation companies. However, until there’s another option, we’ll always have scraping.


Source : https://blog.scrapinghub.com/2016/03/30/web-scraping-to-create-open-data/