Tuesday, 30 December 2014

Web Data Scraping Services Have Various Methods Of Business

Magnetic or optical data removal, or Data Scraping Services, is a term that refers to the elimination of data from digital storage media. The method used varies, depending on the medium and the process involved.

Similarly, patents, models, business strategies and other confidential business information, including sensitive data, can easily be accessed by others if the data is not properly deleted. As I said in the beginning, Data Scraping Services methods vary depending on the storage medium, and for each storage medium there are a variety of techniques.

Optical media such as CDs and DVDs can be destroyed by granulating the plastic. This method does not extract information, but it makes recovery almost impossible. Alternatively, the thin film that coats the top of the disc can be removed by scraping or sanding by hand, physically destroying the data. A less traditional technique is to microwave the disc, which causes the thin storage layer to spark and is very effective on the most common disc types.

Destroying typical modern magnetic media, such as hard drives and tape backup units, is also possible, but doing it in volume requires considerable financial investment in plant. Acids, in particular nitric acid at 50% concentration, react violently with the iron oxide layer and destroy it completely within a few minutes. In some cases incineration may be an alternative. However, this may inadvertently expose the operator to carcinogens and may be restricted in certain countries.

Data Scraping Services, on the other hand, is defined by Wikipedia as "the practice of automatically searching large stores of data for patterns." In other words, you already have the data, and you learn useful things about it through analysis.

Data Scraping Services is often accompanied by a lot of complex algorithms based on statistical methods. How the data got there in the first place is not the concern; the analysis only cares about what is already there. Data removal is different: in many cases a single-pass binary wipe (writing random zeroes and ones over the existing data) will permanently delete all data from the storage device.
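A minimal, hypothetical Python sketch of such a single-pass overwrite of one file is shown below. The function name and the file-level approach are assumptions for illustration only; real sanitization tools work at the device level and follow published standards.

import os

def wipe_file(path, block_size=64 * 1024):
    """Overwrite one file with random bytes in a single pass, then delete it.

    Illustrative sketch only: it does not account for filesystem journaling,
    SSD wear levelling or backups, so it is no substitute for proper media
    sanitization.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < length:
            chunk = os.urandom(min(block_size, length - written))
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())  # push the overwrite to disk before deleting
    os.remove(path)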

It is for this reason that the technology has been left until last. One more point: Data Scraping Services is not screen scraping. That is a great simplification, so I will expand on it a bit.

Fast-forwarding to the web world of today, screen scraping is about extracting information from websites. It means that computer programs can "crawl" or "spider" through web sites and retrieve data. In plain terms, it covers scraping web pages, extracting text data, automated data collection and web site data extraction, even from sites that make this difficult.


Source:http://www.articlesbase.com/outsourcing-articles/web-data-scraping-services-have-various-method-of-business-5594515.html

Monday, 29 December 2014

Scraping By

In his classic 1976 Chesapeake portrait, Beautiful Swimmers, William Warner described the scrape boat as "a workboat unlike any other I had ever seen on the Bay." Seeming half as wide as it was long, he said, it looked like "a miniature battleship." There's a reason for that, of course. It's a classic case of form following function; the boat evolved for one purpose, to ply the Bay's grassy shallows for shedding blue crabs.

Said to "float on a heavy dew," scrape boats run from 26 to 30 feet long and 9 to 10 feet wide. The hull is a shallow-V deadrise that quickly flattens toward the stern, enabling the boat to pull its twin scrapes—rectangular steel frames, each with a trailing mesh bag—in knee-deep waters. The broad beam might sound ungainly, but the hull tapers toward the stern—betraying its sailboat origins. And it has a graceful sheer, flowing from a bow height of a few feet to little more than a foot above the water amidships.

And you want a low freeboard when you spend the whole day hoisting aboard scrapes, which weigh 50 pounds apiece, not including the load of sea grass and crabs that come in too. Low sides or not, there's a higher than average incidence of back problems among scrape boat crabbers. They spend long days bending in precisely the position back doctors say puts undue pressure on the lower back as they sort through rolls of grasses to pluck out the peelers and softies. And that alone may be why crab potting is now the far more common way of catching soft crabs.

Some people think that's good, assuming that dragging a scrape across the Bay's beleaguered grass flats must be destructive. But the smooth bar of the scrape, unlike a toothed dredge, doesn't uproot grasses. In fact, where scraping is traditional, the grass beds seem relatively resilient. I've often thought if Maryland and Virginia had stuck with scraping as the major legal way to soft-crab, overfishing might not have become a problem. Pots can be deployed everywhere and by the thousands, whereas scraping is limited to grass beds and to ground covered at three miles per hour; and even the sturdiest waterman can only pull two of them by hand. But peeler pots seem here to stay, and other soft crabbers have taken to using a single, large scrape operated from larger workboats by hydraulic power.

The bottom line is that these lovely, superbly functional expressions of Chesapeake crabbing culture now number only in the dozens, if you count working, wooden models. There are some fiberglass scrape boat hulls in service, and a Carolina skiff or two has been adapted for the task. They are functional, but have little art to them.

It is probably a sign of how fast scrape boats are going that the Smithsonian Institution recently took the lines off Darlene, a scraper worked by Morris Marsh of Smith Island, for its archives. You can see photos of scrape boats, and learn more about the 140-year-old history of scraping, from Paula Johnson's fine book, The Workboats of Smith Island. Mr. Marsh, still going strong in his late 60s, is the scraper who took Warner out nearly 40 years ago when he was researching Beautiful Swimmers.

Indeed, scraping seems to win over those who master it. Marsh's father-in-law, Ed Harrison, scraped for almost 70 years, nearly wearing through the cross-planked bottom of his boat—from the inside—with decades of walking the planks, tending his scrapes. And an islander who scrapes with Marsh today, David Laird, says he is 71—one year younger than Scotty Boy, the scrape boat he took over from his dad in 1958. "I wouldn't even know how to crab in another boat," Laird says.

Soft crabs may well be caught—or farmed—a century from now on the Chesapeake; but no one will devise a way to take them so intimately and beautifully from the shallowest marsh edges and tiniest crevices in the shore as the scrapers do.

Source:http://www.articlesbase.com/culture-articles/scraping-by-1560919.html

Thursday, 25 December 2014

Choose Mining Wear Parts Wisely

It is important to choose a reputable supplier of mining wear parts; one that has been acknowledged as a leader in mining expertise. You will want to research and seek out a company that specializes in the engineering, manufacturing, procurement and design of mining wear parts and who has access to a multitude of patterns and templates to choose from.

It is vital to find a company that invites you to put them to the test; a company that is committed to selling more than just a product, standing behind the parts that they design and manufacture with an unprecedented industry guarantee. Some companies are so confident in their products that each wear part is stamped with their logo, identifying it as a superior product.

You will also want to find a company that takes pride in establishing strong customer relationships and that employs people who are equally committed to providing outstanding service, with customer satisfaction as a priority. Your research will help you find a mining wear parts company that guarantees that if they do not have the part available, they will find it for you, or that is capable of custom designing products to your exact specifications.

If you stop to consider the ramifications of an equipment malfunction or breakdown on production quotas, the significance of reliable parts becomes readily apparent. The impact can be far reaching if it halts production while the necessary repairs are completed. The ugly reality is that downtime incurs financial losses.

While the cost of aftermarket replacement mining wear parts is one factor, the installation of the part is equally important. It is vital that aftermarket parts are built to a rugged standard to endure the rigorous industrial demands placed on them. Mining wear parts are routinely subjected to high stress abrasion and impact. The fabricated parts need to have the structural strength to be wear resistant with extended usage. Hardened manganese is the preferred material to impart added strength and avoid premature breakage and replacement. Using inferior quality parts may result in the necessity of replacing them prematurely if they do not withstand the wear and tear that they are subjected to daily. While a few dollars may be saved initially by purchasing inferior mining wear parts, production costs can dramatically increase if frequent breakdowns occur and manpower hours are wasted in the field. Efficient use of manpower is an important budget consideration. Reliability is an absolute necessity when you have production deadlines to meet, and operations can quickly grind to a standstill when production is halted.

Quality assurance management monitors the consistency of the parts, demanding that they are machined within precise measurements. In addition, they focus on striving to improve the quality of parts as new technology becomes available. Using precision made, high quality wear parts can make your business more competitive, giving you an advantage and improving your bottom line.

Source:http://ezinearticles.com/?Choose-Mining-Wear-Parts-Wisely&id=6691631

Monday, 22 December 2014

Scraping table from any web page with R or CloudStat

Scraping table from any web page with R or CloudStat:

If you need to use data from the internet, you don't have to retype it; you can simply extract or scrape it if you know the web URL.

Thanks to the XML package for R, which provides the amazing readHTMLTable() function.

As a case study,

I want to scrape data:

    US Airline Customer Score.
    World Top Chess Players (Men).

A. Scraping US Airline Customer Score table from

http://www.theacsi.org/index.php?option=com_content&view=article&id=147&catid=&Itemid=212&i=Airlines

Code:

airline = "http://www.theacsi.org/index.php?option=com_content&view=article&id=147&catid=&Itemid=212&i=Airlines"

airline.table = readHTMLTable(airline, header=T, which=1,stringsAsFactors=F)

Result:

> library(XML)

Warning message:

package "XML" was built under R version 2.14.1

> airline = "http://www.theacsi.org/index.php?option=com_content&view=article&id=147&catid=&Itemid=212&i=Airlines"

> airline.table = readHTMLTable(airline, header=T, which=1,stringsAsFactors=F)

> airline.table

                     Base-line 95 96 97 98 99 00 01 02 03 04 05 06 07 08 09 10

1          Southwest        78 76 76 76 74 72 70 70 74 75 73 74 74 76 79 81 79
2         All Others        NM 70 74 70 62 67 63 64 72 74 73 74 74 75 75 77 75
3           Airlines        72 69 69 67 65 63 63 61 66 67 66 66 65 63 62 64 66
4        Continental        67 64 66 64 66 64 62 67 68 68 67 70 67 69 62 68 71
5           American        70 71 71 62 67 64 63 62 63 67 66 64 62 60 62 60 63
6             United        71 67 70 68 65 62 62 59 64 63 64 61 63 56 56 56 60
7         US Airways        72 67 66 68 65 61 62 60 63 64 62 57 62 61 54 59 62
8              Delta        77 72 67 69 65 68 66 61 66 67 67 65 64 59 60 64 62
9 Northwest Airlines        69 71 67 64 63 53 62 56 65 64 64 64 61 61 57 57 61
  11 PreviousYear%Change FirstYear%Change
1 81                 2.5              3.8
3 65                -1.5             -9.7
4 64                -9.9             -4.5
5 63                 0.0            -10.0
7 61                -1.6            -15.3
8 56                -9.7            -27.3
9  #                 N/A              N/A

>

B. Scraping World Top Chess players (Men) table from http://ratings.fide.com/top.phtml?list=men

Code:

chess = "http://ratings.fide.com/top.phtml?list=men"

chess.table = readHTMLTable(chess, header=T, which=5,stringsAsFactors=F)

Result:

> chess = "http://ratings.fide.com/top.phtml?list=men"

> chess.table = readHTMLTable(chess, header=T, which=5,stringsAsFactors=F)

> chess.table

     Rank                       Name Title Country Rating Games B-Year

1      1           Carlsen, Magnus    g    NOR  2835   17  1990
2      2            Aronian, Levon    g    ARM  2805   25  1982
3      3         Kramnik, Vladimir    g    RUS  2801   17  1975
4      4        Anand, Viswanathan    g    IND  2799   17  1969
5      5         Radjabov, Teimour    g    AZE  2773    9  1987
6      6          Topalov, Veselin    g    BUL  2770    9  1975
7      7          Karjakin, Sergey    g    RUS  2769   16  1990
8      8         Ivanchuk, Vassily    g    UKR  2766   16  1969
9      9     Morozevich, Alexander    g    RUS  2763    6  1977
10    10           Gashimov, Vugar    g    AZE  2761    9  1986
11    11       Grischuk, Alexander    g    RUS  2761    8  1983
12    12          Nakamura, Hikaru    g    USA  2759   17  1987
13    13            Svidler, Peter    g    RUS  2749   17  1976
14    14    Mamedyarov, Shakhriyar    g    AZE  2747    9  1985
15    15       Tomashevsky, Evgeny    g    RUS  2740    0  1987
16    16            Gelfand, Boris    g    ISR  2739    9  1968
17    17          Caruana, Fabiano    g    ITA  2736   19  1992
18    18       Nepomniachtchi, Ian    g    RUS  2735   16  1990
19    19                 Wang, Hao    g    CHN  2733    6  1989
20    20              Kamsky, Gata    g    USA  2732    0  1974
21    21  Dominguez Perez, Leinier    g    CUB  2730    6  1983
22    22         Jakovenko, Dmitry    g    RUS  2729    0  1983
23    23        Ponomariov, Ruslan    g    UKR  2727   13  1983
24    24          Vitiugov, Nikita    g    RUS  2726    1  1987
25    25            Adams, Michael    g    ENG  2724   17  1971
26    26               Leko, Peter    g    HUN  2720    9  1979
27    27            Almasi, Zoltan    g    HUN  2717    8  1976
28    28               Giri, Anish    g    NED  2714   15  1994
29    29            Le, Quang Liem    g    VIE  2714    0  1991
30    30             Navara, David    g    CZE  2712    8  1985
31    31            Shirov, Alexei    g    LAT  2710   13  1972
32    32             Polgar, Judit    g    HUN  2710    0  1976
33    33     Riazantsev, Alexander    g    RUS  2710    0  1985
34    34       Wojtaszek, Radoslaw    g    POL  2706    8  1987
35    35      Moiseenko, Alexander    g    UKR  2706    7  1980
36    36   Vallejo Pons, Francisco    g    ESP  2705   15  1982
37    37        Malakhov, Vladimir    g    RUS  2705    0  1980
38    38            Jobava, Baadur    g    GEO  2704   23  1983
39    39           Bacrot, Etienne    g    FRA  2704   14  1983
40    40          Laznicka, Viktor    g    CZE  2704    8  1988
41    41            Sutovsky, Emil    g    ISR  2703    8  1977
42    42        Naiditsch, Arkadij    g    GER  2702   14  1985
43    43         Movsesian, Sergei    g    ARM  2700    9  1978
44    44       Sasikiran, Krishnan    g    IND  2700    9  1981
45    45   Vachier-Lagrave, Maxime    g    FRA  2699   13  1990
46    46            Dreev, Aleksey    g    RUS  2698    6  1969
47    47           Efimenko, Zahar    g    UKR  2695    8  1985
48    48         Volokitin, Andrei    g    UKR  2695    0  1986
49    49                 Wang, Yue    g    CHN  2694    6  1987
50    50        Fressinet, Laurent    g    FRA  2693   17  1981
51    51                Li, Chao b    g    CHN  2693    6  1989
52    52            Grachev, Boris    g    RUS  2693    0  1986
53    53      Nielsen, Peter Heine    g    DEN  2693    0  1973
54    54            Van Wely, Loek    g    NED  2692   13  1972
55    55    Bruzon Batista, Lazaro    g    CUB  2691   19  1982
56    56           McShane, Luke J    g    ENG  2691    8  1984
57    57            Eljanov, Pavel    g    UKR  2690   10  1983
58    58      Kasimdzhanov, Rustam    g    UZB  2689   14  1979
59    59         Inarkiev, Ernesto    g    RUS  2689    6  1985
60    60         Zvjaginsev, Vadim    g    RUS  2688    8  1976
61    61         Andreikin, Dmitry    g    RUS  2688    0  1990
62    62    Areshchenko, Alexander    g    UKR  2688    0  1986
63    63         Rublevsky, Sergei    g    RUS  2686    0  1974
64    64         Akopian, Vladimir    g    ARM  2685    8  1971
65    65          Potkin, Vladimir    g    RUS  2684    0  1982
66    66       Sargissian, Gabriel    g    ARM  2683   15  1983
67    67            Berkes, Ferenc    g    HUN  2682   16  1985
68    68           Bologan, Viktor    g    MDA  2680   15  1971
69    69          Bauer, Christian    g    FRA  2679   24  1977
70    70          Tiviakov, Sergei    g    NED  2677   22  1973
71    71            Short, Nigel D    g    ENG  2677   15  1965
72    72        Motylev, Alexander    g    RUS  2677    6  1979
73    73         Gharamian, Tigran    g    FRA  2676    0  1984
74    74          Kobalia, Mikhail    g    RUS  2673    0  1978
75    75              Meier, Georg    g    GER  2671    9  1987
76    76       Onischuk, Alexander    g    USA  2670   13  1975
77    77              Bu, Xiangzhi    g    CHN  2670    6  1985
78    78          Alekseev, Evgeny    g    RUS  2670    0  1985
79    79            Azarov, Sergei    g    BLR  2667    0  1983
80    80        Kryvoruchko, Yuriy    g    UKR  2666    0  1986
81    81             Balogh, Csaba    g    HUN  2665    8  1987
82    82           Harikrishna, P.    g    IND  2665    6  1986
83    83       Khismatullin, Denis    g    RUS  2664    8  1984
84    84   Nguyen, Ngoc Truong Son    g    VIE  2662    6  1990
85    85           Fridman, Daniel    g    GER  2660   11  1976
86    86              Smirin, Ilia    g    ISR  2660    7  1968
87    87               Ding, Liren    g    CHN  2660    6  1992
88    88         Sadler, Matthew D    g    ENG  2660    3  1974
89    89            Korobov, Anton    g    UKR  2660    0  1985
90    90          Cheparinov, Ivan    g    BUL  2659   18  1986
91    91          Timofeev, Artyom    g    RUS  2659    0  1985
92    92           Georgiev, Kiril    g    BUL  2658   17  1965
93    93           Bartel, Mateusz    g    POL  2658    9  1985
94    94          Zhigalko, Sergei    g    BLR  2658    8  1989
95    95         Feller, Sebastien    g    FRA  2658    0  1991
96    96            Ragger, Markus    g    AUT  2655   17  1988
97    97         Jones, Gawain C B    g    ENG  2653   27  1987
98    98                So, Wesley    g    PHI  2653    5  1993
99    99              Milov, Vadim    g    SUI  2653    0  1972
100  100           Gupta, Abhijeet    g    IND  2652    9  1989
101  101            Postny, Evgeny    g    ISR  2652    8  1981
102  102             Roiz, Michael    g    ISR  2652    6  1983
103  103           Gyimesi, Zoltan    g    HUN  2652    4  1977
104  104          Nikolic, Predrag    g    BIH  2652    2  1960

>

Done. You have successfully scraped data from a web page with R or CloudStat.

Then you can analyze it as usual. Great! No more retyping the data. Enjoy!

Source: http://www.r-bloggers.com/scraping-table-from-any-web-page-with-r-or-cloudstat/

Friday, 19 December 2014

Extracting Wisdom Teeth Tips

It is believed that due to evolution, our jaws are now smaller than our ancient ancestors'. For this reason, our mouths often do not have adequate room to accommodate the third molars, making them basically useless and in some cases detrimental. Even if they are not impacted, wisdom teeth may be hard to clean, and therefore require removal to reduce the probability of caries and infection.

As part of your routine dental visits, your dentist will likely take X-rays to monitor the development of your third molars. They will likely recommend removing them as soon as possible to avoid any complications. The extraction of wisdom teeth can sometimes be a costly and daunting procedure; for these reasons many patients delay having them extracted. However, if the impacted teeth become infected, it is important to see your dental professional at once. Symptoms of infection due to impacted wisdom teeth include:

•    Pain in the gums and surrounding areas
•    Red or inflamed gums
•    Tender or bleeding gums
•    Inflammation around the face and jaw
•    Bad breath (halitosis)
•    Frequent headaches

If a single molar needs to be extracted, local anesthetic will be used. In the case where several or all the teeth need extraction, the patient will usually be "put under" using a general anesthetic. If you have an infection or medical complications that put you at a higher than normal risk, the surgery may be performed at a hospital. Extraction of the wisdom teeth is a day surgery, and patients are usually able to return to normal activities in a day or so. You may be prescribed antibiotics prior to the surgery, and you will likely be asked not to eat or drink the night before the surgery.

During the surgery, your dentist makes an incision in the gum tissue covering the tooth. Once the tooth is exposed, the dentist may cut the tooth into smaller pieces to make extraction easier. After the extraction you will be given stitches to mend the gum tissue. You may need to return a few days later to have the stitches removed. You will be monitored after the surgery to ensure that you are not bleeding excessively.

The best time for extraction is when the patient is in their late teens to avoid unnecessary complications. Wisdom teeth extractions performed later in life are still beneficial, but the removal may be more difficult and healing may take longer. Therefore it is wise to have a conversation with your dentist regarding your wisdom teeth as early as possible.

Most people will experience the emergence of their wisdom teeth at some point in their life, and extraction is sometimes necessary, either as a preventative measure or to fix an actual problem. It is best to deal with any problems regarding your wisdom teeth as soon as possible to avoid unnecessary difficulties.

Source:http://ezinearticles.com/?Extracting-Wisdom-Teeth-Tips&id=7788863

Wednesday, 17 December 2014

Importance of Data Mining Services in Business

Data mining is the recovery of hidden information from data by means of algorithms. It helps extract useful information from the data, which can be used to make practical interpretations for decision making.

It can be technically defined as the automated extraction of hidden information from large databases for predictive analysis. In other words, it is the retrieval of useful information from large masses of data, which is also presented in an analyzed form for specific decision-making. Although data mining is a relatively new term, the technology is not. It is thus also known as knowledge discovery in databases, since it involves searching for implicit information in large databases.

It is primarily used today by companies with a strong customer focus - retail, financial, communication and marketing organizations. It has become important because of its wide applicability. It is being used increasingly in business applications for understanding and then predicting valuable data, like consumer buying actions and buying tendencies, profiles of customers, industry analysis, etc. It is used in several applications like market research, consumer behavior, direct marketing, bioinformatics, genetics, text analysis, e-commerce, customer relationship management and financial services.

However, the use of some advanced technologies makes it a decision making tool as well. It is used in market research, industry research and for competitor analysis. It has applications in major industries like direct marketing, e-commerce, customer relationship management, scientific tests, genetics, financial services and utilities.

Data mining consists of several major elements (a brief sketch follows the list):

•    Extract and load operation data onto the data store system.
•    Store and manage the data in a multidimensional database system.
•    Provide data access to business analysts and information technology professionals.
•    Analyze the data by application software.
•    Present the data in a useful format, such as a graph or table.
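As a rough, hypothetical sketch of those elements in miniature, the Python snippet below loads a few records into a store, analyzes them with a query and prints a small summary. The table and column names are invented, and a real deployment would use a proper multidimensional store rather than an in-memory database.

import sqlite3

# Extract and load: put a few example records into a store (in-memory here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 120.0), ("South", 75.5), ("North", 60.0)])

# Analyze: aggregate by region. Present: print a small summary table.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print("{:<6} {:>8.1f}".format(region, total))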

The use of data mining in business makes the data more relevant to its application. There are several kinds of data mining: text mining, web mining, relational database mining, graphic data mining, audio mining and video mining, all of which are used in business intelligence applications. Data mining software is used to analyze consumer data and trends in banking as well as many other industries.

Source:http://ezinearticles.com/?Importance-of-Data-Mining-Services-in-Business&id=2601221

Tuesday, 16 December 2014

Autoscraping casts a wider net

We have recently started letting more users into the private beta for our Autoscraping service. We’re receiving a lot of applications following the shutdown of Needlebase and we’re increasing our capacity to accommodate these users.

Natalia made a screencast to help our new users get started:

It’s also a great introduction to what this service can do.

We released slybot as an open source integration of the scrapely extraction library and the scrapy framework. This is the core technology behind the autoscraping service and we will make it easy to export autoscraping spiders from Scrapinghub  and run them completely with slybot – allowing our users to have the flexibility and freedom provided by open source.

Source:http://blog.scrapinghub.com/2012/02/27/autoscraping-casts-a-wider-net/

Sunday, 14 December 2014

Local ScraperWiki Library

It quite annoyed me that you can only use the scraperwiki library on a ScraperWiki instance; most of it could work fine elsewhere. So I’ve pulled it out (well, for Python at least) so you can use it offline.

How to use
pip install scraperwiki_local

You can then import scraperwiki in scripts run on your local computer. The scraperwiki.sqlite component is powered by DumpTruck, which you can optionally install independently of scraperwiki_local.

pip install dumptruck
Differences

DumpTruck works a bit differently from (and better than) the hosted ScraperWiki library, but the change shouldn’t break much existing code. To give you an idea of the ways they differ, here are two examples:

Complex cell values
What happens if you do this?
import scraperwiki
shopping_list = ['carrots', 'orange juice', 'chainsaw']
scraperwiki.sqlite.save([], {'shopping_list': shopping_list})
On a ScraperWiki server, shopping_list is converted to its unicode representation, which looks like this:
[u'carrots', u'orange juice', u'chainsaw']
In the local version, it is encoded to JSON, so it looks like this:
["carrots","orange juice","chainsaw"]


And if it can’t be encoded to JSON, you get an error. And when you retrieve it, it comes back as a list rather than as a string.

Case-insensitive column names
SQL is less sensitive to case than Python. The following code works fine in both versions of the library.

In [1]: shopping_list = ['carrots', 'orange juice', 'chainsaw']
In [2]: scraperwiki.sqlite.save([], {'shopping_list': shopping_list})
In [3]: scraperwiki.sqlite.save([], {'sHOpPiNg_liST': shopping_list})
In [4]: scraperwiki.sqlite.select('* from swdata')

Out[4]: [{u'shopping_list': [u'carrots', u'orange juice', u'chainsaw']}, {u'shopping_list': [u'carrots', u'orange juice', u'chainsaw']}]

Note that the key in the returned data is ‘shopping_list’ and not ‘sHOpPiNg_liST’; the database uses the first one that was sent. Now let’s retrieve the individual cell values.

In [5]: data = scraperwiki.sqlite.select('* from swdata')
In [6]: print([row['shopping_list'] for row in data])
Out[6]: [[u'carrots', u'orange juice', u'chainsaw'], [u'carrots', u'orange juice', u'chainsaw']]

The code above works in both versions of the library, but the code below only works in the local version; it raises a KeyError on the hosted version.

In [7]: print(data[0]['Shopping_List'])
Out[7]: [u'carrots', u'orange juice', u'chainsaw']

Here’s why. In the hosted version, scraperwiki.sqlite.select returns a list of ordinary dictionaries. In the local version, scraperwiki.sqlite.select returns a list of special dictionaries that have case-insensitive keys.

Develop locally

Here’s a start at developing ScraperWiki scripts locally, with whatever coding environment you are used to. For a lot of things, the local library will do the same thing as the hosted. For another lot of things, there will be differences and the differences won’t matter.

If you want to develop locally (just Python for now), you can use the local library and then move your script to a ScraperWiki script when you’ve finished developing it (perhaps using Thom Neale’s ScraperWiki scraper). Or you could just run it somewhere else, like your own computer or web server. Enjoy!

Source:https://blog.scraperwiki.com/2012/06/local-scraperwiki-library/

Monday, 8 December 2014

The Hubcast #4: A Guide to Boston, Scraping Local Leads, & Designers.Hubspot.com

The Hubcast Podcast Episode 004

Welcome back to The Hubcast folks! As mentioned last week, this will be a weekly podcast all about HubSpot news, tips, and tricks. Please also note the extensive show notes below including some new HubSpot video tutorials created by George Thomas.

Show Notes:

Inbound 2014

THE INSIDER’S GUIDE TO BOSTON

Boston Guide

On September 15-18, the Boston Convention & Exhibition Center will be filled with sales and marketing professionals for INBOUND 2014. Whether this will be your first time visiting Boston, you’ve visited Boston in the past, or you’ve lived in the city for years, The Insider’s Guide to Boston is your go-to guide for enjoying everything the city has to offer. Click on a persona below to get started.

Are you The Brewmaster – The Workaholic – The Chillaxer?

Check out the guide here

HubSpot Tips & Tricks

Prospects Tool – Scrape Local Leads


This week's tip and trick is how to silence some of the noise in your Prospects tool. Sometimes you might need to look only at local leads for calls or drop-offs. We show you how to do that and much more with the HubSpot Prospects Tool.

Watch the tutorial here

HubSpot Strategy
Crack down on your site's copy.


We talk about how your home page and about page may be talking to your potential customers in all the wrong ways. Are you the me, me, me person at the digital party? Or are you letting people know how their problems can be solved by your products or services?

HubSpot Updates
(Each week on the Hubcast, George and Marcus will be looking at HubSpot’s newest updates to their software. And in this particular episode, we’ll be discussing 2 of their newest updates)
Default Contact Properties

You can now choose a default option on contact properties that sets a default value for that property that can be applied across your entire contacts database. When creating or editing a new contact property in Contacts Settings, you’ll see a new default option next to the labels on properties with field types “Dropdown,” “Radio Select” and “Single On/Off Checkbox”.


When you set a contact property as “default”, all contacts who don’t have any value set for this property will adopt the default value you’ve selected. In the example above, we’re creating a property to track whether your contact uses a new feature. Initially, all of them would be “No,” and that’s the default value that will be applied database-wide. As a result, this value will be stamped on each contact record where it wasn’t already present.

Now, when you want to apply a contact property across multiple contacts, you don’t have to create a list of those contacts and then create a workflow that stamps that contact property across those contacts. This new feature allows you to bypass those steps by using the “default” option on new contact properties you create.

Watch the tutorial here
RSS Module with Images


Now available is a new option within modules in the template builder that will allow you to easily add a featured image to an RSS module. This module will show a blog post’s featured image next to the feed of recent blog content. If you are a marketer, all you need to do is simply check the “Featured Image” box off in the RSS Listing module to display a list of recent COS blog posts with images on any page. No developers or code necessary to do this!

If you are a designer and want to add additional styling to an RSS module with images, you can do so using HubL tokens.

Here is documentation on how to get started.

Watch the tutorial here

HubSpot Wishlist

 The HubSpot Keywords Tool


Why, oh why, HubSpot, can we only have 1,000 keywords in our Keywords tool? We talk about how, for many companies, 1,000 keywords just don't cut it. For example, Yale Appliance can easily blow through that limit.

Source: http://www.thesaleslion.com/hubcast-podcast-004/

Monday, 1 December 2014

Web Scraping’s 2013 Review – part 2

As promised, we are back with the second part of this year's web scraping review. Today we will focus not only on the events of 2013 related to web scraping, but also on Big Data and what this year meant for that concept.

First of all, we could not talk about the conferences that involved data mining without mentioning the TED conferences. This year the speakers focused on the power of data analysis to help medicine and to prevent possible crises in third-world countries. Regarding data mining, everyone agreed that this is one of the best ways to obtain virtual data.

Also, a study by MeriTalk, a government IT networking group, commissioned by NetApp, showed this year that companies are not prepared for the informational revolution. The survey found that state and local IT pros are struggling to keep up with data demands. Just 59% of state and local agencies are analyzing the data they collect, and less than half are using it to make strategic decisions. State and local agencies estimate that they have just 46% of the data storage and access, 42% of the computing power, and 35% of the personnel they need to successfully leverage large data sets.

Some economists argue that it is often difficult to estimate the true value of new technologies, and that Big Data may already be delivering benefits that are uncounted in official economic statistics. Cat videos and television programs on Hulu, for example, produce pleasure for Web surfers — so shouldn’t economists find a way to value such intangible activity, whether or not it moves the needle of the gross domestic product?

We will end this article with some numbers about the spectacular growth of data available on the internet. There were 30 billion gigabytes of video, e-mails, Web transactions and business-to-business analytics in 2005. The total was expected to reach more than 20 times that figure in 2013, with off-the-charts increases to follow in the years ahead, according to research conducted by Cisco. So, as you can see, we have good reason to believe that 2014 will be at least as good as 2013.

Source:http://thewebminer.com/blog/2013/12/

Friday, 28 November 2014

Scraping R-bloggers with Python – Part 2

In my previous post I showed how to write a small simple python script to download the pages of R-bloggers.com. If you followed that post and ran the script, you should have a folder on your hard drive with 2409 .html files labeled post1.html , post2.html and so forth. The next step is to write a small script that extract the information we want from each page, and store that information in a .csv file that is easily read by R. In this post I will show how to extract the post title, author name and date of a given post and store it in a .csv file with a unique id.

To do this open a document in your favorite python editor (I like to use aquamacs) and name it: extraction.py. As in the previous post we start by importing the modules that we will use for the extraction:

from BeautifulSoup import BeautifulSoup

import os
import re

As in the previous post, we will be using the BeautifulSoup module to extract the relevant information from the pages. The os module is used to get a list of files from the directory where we have saved the .html files, and finally the re module allows us to use regular expressions to clean up titles that include a comma or a newline (\n). We need to remove these as they would mess up the formatting of the .csv file.

After having read in the modules, we need to get a list of files that we can iterate over. First we need to specify the path where the files are saved, and then we use the os module to get all the filenames in the specified directory:

path = "/Users/thomasjensen/Documents/RBloggersScrape/download"

listing = os.listdir(path)

It might be that there are other files in the given directory, hence we apply a filter, in the shape of a list comprehension, to weed out any file names that do not match our naming scheme:

listing = [name for name in listing if re.search(r"post\d+\.html",name) != None]

Notice that a regular expression was used to determine whether a given name in the list matched our naming scheme. For more on regular expressions have a look at this site.

The final steps in preparing our extraction are to change the working directory to where we have our .html files, and to create an empty dictionary:

os.chdir(path)
data = {}

Dictionaries are one of the great features of Python. Essentially a dictionary is a mapping of a key to a specific value; the fact that dictionaries can be nested within each other allows us to create data structures similar to R's data frames.
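A tiny, hypothetical illustration of that idea (the values are made up): each post gets its own inner dictionary keyed by an id, much like a row in a data frame.

data = {
    "post1": {"title": "Hello R", "author": "Jane Doe", "date": "May 1, 2012"},
    "post2": {"title": "More R", "author": "John Doe", "date": "May 2, 2012"},
}
print(data["post1"]["author"])  # prints: Jane Doe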

Now we are ready to begin extracting information from our downloaded pages. Much as in the previous post, we will loop over all the file names, read each file into Python and create a BeautifulSoup object from the file:

for page in listing:
    site = open(page,"rb")
    soup = BeautifulSoup(site)

In order to store the values we extract from a given page, we update the dictionary with a unique key for the page. Since our naming scheme made sure that each file had a unique name, we simply remove the .html part from the page name, and use that as our key:

key = re.sub(".html","",page)

data.update({key:{}})

This will create a mapping between our key and an empty dictionary, nested within the data dictionary. Once this is done we can start extracting information and storing it in our newly created nested dictionary. The content we want is located in the main column, which has the id tag “leftcontent” in the html code. To get at this we use the find() function on the soup object created above:

content = soup.find("div", id = "leftcontent")

The first “h1” tag in our content object contains the title, so again we will use the find() function on the content object, to find the first “h1” tag:

title = content.findNext("h1").text

To get the text within the “h1” tag, .text has been added to our search within the content object.

To find the author name, we are lucky that there is a class of “div” tags called “meta” which contain a link with the author name in it. To get the author name we simply find the meta div class and search for a link. Then we pull out the text of the link tag:

author = content.find("div",{"class":"meta"}).findNext("a").text

Getting the date is a simple matter, as it is nested within a div tag with the class “date”:

date = content.find("div",{"class":"date"}).text

Once we have the three variables we put them in dictionaries that are nested within the nested dictionary we created with the key:

data[key]["title"] = title
data[key]["author"] = author
data[key]["date"] = date

Once we have run the loop and gone through all posts, we need to write them in the right format to a .csv file. To begin with we open a .csv file named output:

output = open("/Users/thomasjensen/Documents/RBloggersScrape/output.csv","wb")

then we create a header that contains the variable names and write it to the output.csv file as the first row:

variables = unicode(",".join(["id","date","author","title"]))
header = variables + "\n"
output.write(header.encode("utf8"))

Next we pull out all the unique keys from our dictionary that represent individual posts:

keys = data.keys()

Now it is a simple matter of looping through all the keys, pulling out the information associated with each key, and writing that information to the output.csv file:

for key in keys:
    print key
    id = key
    date = re.sub(",","",data[key]["date"])
    author = data[key]["author"]
    title = re.sub(",","",data[key]["title"])
    title = re.sub("\\n","",title)
    linelist = [id,date,author,title]
    linestring = unicode(",".join(linelist))
    linestring = linestring + "\n"
    output.write(linestring.encode("utf-8"))

Notice that we first create four variables that contain the id, date, author and title information. With regards to the title we use two regular expressions to remove any commas and “\n” from the title, as these would create new columns or new line breaks in the output.csv file. Finally we put the variables together in a list, and turn the list into a string with the list items separated by a comma. Then a linebreak is added to the end of the string, and the string is written to the output.csv file. As a last step we close the file connection:

output.close()
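As a side note (not part of the original walkthrough), Python's built-in csv module could handle the quoting of commas and newlines for us, so the regular-expression clean-up would not be needed. A minimal sketch, assuming the same data dictionary built above:

import csv

with open("/Users/thomasjensen/Documents/RBloggersScrape/output.csv", "wb") as f:
    writer = csv.writer(f)  # quotes any field that contains commas or newlines
    writer.writerow(["id", "date", "author", "title"])
    for key in data:
        row = [key, data[key]["date"], data[key]["author"], data[key]["title"]]
        writer.writerow([field.encode("utf-8") for field in row])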

And that is it. If you followed the steps you should now have a csv file in your directory with 2409 rows, and four variables – ready to be read into R. Stay tuned for the next post which will show how we can use this data to see how R-bloggers has developed since 2005. The full extraction script is shown below:

from BeautifulSoup import BeautifulSoup

import os
import re

 path = "/Users/thomasjensen/Documents/RBloggersScrape/download"
 listing = os.listdir(path)

listing = [name for name in listing if re.search(r"post\d+\.html",name) != None]
 os.chdir(path)
 data = {}
 for page in listing:
site = open(page,"rb")
soup = BeautifulSoup(site)
key = re.sub(".html","",page)
print key
data.update({key:{}})
 content = soup.find("div", id = "leftcontent")
title = content.findNext("h1").text
author = content.find("div",{"class":"meta"}).findNext("a").text
date = content.find("div",{"class":"date"}).text
data[key]["title"] = title
data[key]["author"] = author
data[key]["date"] = date

 output = open("/Users/thomasjensen/Documents/RBloggersScrape/output.csv","wb")

 keys = data.keys()
 variables = unicode(",".join(["id","date","author","title"]))
 header = variables + "\n"
 output.write(header.encode("utf8"))
 for key in keys:
print key
id = key
date = re.sub(",","",data[key]["date"])
author = data[key]["author"]
title = re.sub(",","",data[key]["title"])
title = re.sub("\\n","",title)
linelist = [id,date,author,title]
linestring = unicode(",".join(linelist))
linestring = linestring + "\n"
output.write(linestring.encode("utf-8"))
 output.close()

Source:http://www.r-bloggers.com/scraping-r-bloggers-with-python-part-2/

Wednesday, 26 November 2014

Data Mining and Frequent Datasets

I've been doing some work for my exams in a few days and I'm going through some past papers but unfortunately there are no corresponding answers. I've answered the question and I was wondering if someone could tell me if I am correct.

My question is

    (c) A transactional dataset, T, is given below:
    t1: Milk, Chicken, Beer
    t2: Chicken, Cheese
    t3: Cheese, Boots
    t4: Cheese, Chicken, Beer,
    t5: Chicken, Beer, Clothes, Cheese, Milk
    t6: Clothes, Beer, Milk
    t7: Beer, Milk, Clothes

    Assume that minimum support is 0.5 (minsup = 0.5).

    (i) Find all frequent itemsets.

Here is how I worked it out:

    Item : Amount
    Milk : 4
    Chicken : 4
    Beer : 5
    Cheese : 4
    Boots : 1
    Clothes : 3

Now because the minsup is 0.5 you eliminate boots and clothes and make a combo of the remaining giving:

    {items} : Amount
    {Milk, Chicken} : 2
    {Milk, Beer} : 4
    {Milk, Cheese} : 1
    {Chicken, Beer} : 3
    {Chicken, Cheese} : 3
    {Beer, Cheese} : 2

Which leaves milk and beer as the only frequent item set then as it is the only one above the minsup?


Nanor

3 Answers

There are two ways to solve the problem:

    using Apriori algorithm
    Using the FP-Growth algorithm

Assuming that you are using Apriori, the answer you got is correct.

The algorithm is simple:

First you count frequent 1-item sets and exclude the item-sets below minimum support.

Then count frequent 2-item sets by combining frequent items from previous iteration and exclude the item-sets below support threshold.

The algorithm can go on until no item-sets are greater than threshold.

In the problem given to you, you only get one 2-item set above the threshold, so you can't move further.
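If it helps to check the counts, here is a small, hypothetical Python sketch of the two Apriori passes described above, run on the transactions from the question (illustration only, not part of the original answer):

from itertools import combinations

transactions = [
    {"Milk", "Chicken", "Beer"},
    {"Chicken", "Cheese"},
    {"Cheese", "Boots"},
    {"Cheese", "Chicken", "Beer"},
    {"Chicken", "Beer", "Clothes", "Cheese", "Milk"},
    {"Clothes", "Beer", "Milk"},
    {"Beer", "Milk", "Clothes"},
]
min_count = 0.5 * len(transactions)  # minsup = 0.5, i.e. at least 3.5 baskets

def support(itemset):
    # Count the transactions that contain every item of the candidate itemset.
    return sum(1 for t in transactions if itemset <= t)

# Pass 1: frequent single items.
items = set().union(*transactions)
frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_count]

# Pass 2: candidate pairs built only from the frequent single items.
candidates = [a | b for a, b in combinations(frequent, 2)]
frequent_pairs = [c for c in candidates if support(c) >= min_count]

print(sorted(sorted(pair) for pair in frequent_pairs))  # [['Beer', 'Milk']]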

There is a solved example of further steps on Wikipedia here.

You can refer "Data Mining Concepts and Techniques" by Han and Kamber for more examples.


There are more than two algorithms to solve this problem. I will just mention a few of them: Apriori, FPGrowth, Eclat, HMine, DCI, Relim, AIM, etc. – Phil Mar 5 '13 at 7:18

OK, to start, you must first understand that data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.

Now, the amount of raw data stored in corporate databases is exploding. From trillions of point-of-sale transactions and credit card purchases to pixel-by-pixel images of galaxies, databases are now measured in gigabytes and terabytes. (One terabyte = one trillion bytes. A terabyte is equivalent to about 2 million books!) For instance, every day, Wal-Mart uploads 20 million point-of-sale transactions to an A&T massively parallel system with 483 processors running a centralized database.

Raw data by itself, however, does not provide much information. In today's fiercely competitive business environment, companies need to rapidly turn these terabytes of raw data into significant insights into their customers and markets to guide their marketing, investment, and management strategies.

Now you must understand that association rule mining is an important model in data mining. Its mining algorithms discover all item associations (or rules) in the data that satisfy the user-specified minimum support (minsup) and minimum confidence (minconf) constraints. Minsup controls the minimum number of data cases that a rule must cover. Minconf controls the predictive strength of the rule.

Since only one minsup is used for the whole database, the model implicitly assumes that all items in the data are of the same nature and/or have similar frequencies in the data. This is, however, seldom the case in real-life applications. In many applications, some items appear very frequently in the data, while others rarely appear. If minsup is set too high, those rules that involve rare items will not be found. To find rules that involve both frequent and rare items, minsup has to be set very low.

This may cause a combinatorial explosion, because those frequent items will be associated with one another in all possible ways. This dilemma is called the rare item problem. One proposed technique to solve this problem allows the user to specify multiple minimum supports to reflect the natures of the items and their varied frequencies in the database. In rule mining, different rules may then need to satisfy different minimum supports depending on what items are in the rules.

Given a set of transactions T (the database), the problem of mining association rules is to discover all association rules that have support and confidence greater than the user-specified minimum support (called minsup) and minimum confidence (called minconf).
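To make that concrete with the transactions from the question above: the rule Milk -> Beer has support 4/7 ≈ 0.57, because four of the seven baskets contain both Milk and Beer, and confidence 4/4 = 1.0, because every one of the four baskets that contains Milk also contains Beer. With minsup = 0.5 it would therefore be reported for any minconf up to 1.0.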

I hope that once you understand the very basics of data mining that the answer to this question shall become apparent.


The Apriori algorithm is based on the idea that for a pair of items to be frequent, each individual item should also be frequent. If the hamburger-ketchup pair is frequent, the hamburger itself must also appear frequently in the baskets. The same can be said about the ketchup.

So the algorithm establishes a "threshold X" to define what is and what is not frequent. If an item appears more than X times, it is considered frequent.

The first step of the algorithm is to pass over each item in each basket and calculate its frequency (count how many times it appears). This can be done with a hash of size N, where position y of the hash refers to the frequency of item y.

If item y has a frequency greater than X, it is said to be frequent.

In the second step of the algorithm, we iterate through the items again, computing the frequency of pairs in the baskets. The catch is that we compute this only for items that are individually frequent. So if item y and item z are frequent on their own, we then compute the frequency of the pair. This condition greatly reduces the number of pairs to compute, and the amount of memory taken.

Once this is calculated, the pairs with frequencies greater than the threshold are said to be frequent itemsets.

Source: http://stackoverflow.com/questions/14164853/data-mining-and-frequent-datasets?rq=1

Monday, 24 November 2014

4 Data Mining Tips to Scrape Real Estate Data; An Innovative Way to Give the Realty Business a Boost!

The internet has become a huge source of data – in fact, it has turned into a goldmine for marketers, from which they can easily dig up useful data!

    Web scraping has become the norm in today's competitive era, where the one with the most relevant information wins the race!

Real Estate Data Extraction and Scraping Service

It has helped many industries carve a niche in the market, especially real estate. Scraping real estate data has been of great help for professionals to reach out to a large number of people and gather reliable property data. However, there are some people for whom web scraping is still an alien concept, most probably because most of its advantages are not discussed.

    There are institutions, companies and organizations, entrepreneurs, as well as ordinary citizens generating an extraordinary amount of information every day. Property information extraction can be used effectively to get an idea of the customer psyche and even generate valuable leads to further the business.

In addition to this, data mining has also some of following uses making it an indispensable part of marketing.

Gather Properties Details from Different Geographical Locations

Suppose you are an estate agent and want to expand your business to a neighboring city or state, but you are short of information. You are completely aware of the properties in your vicinity and in your town; data mining services, however, will help you get an idea about the properties in the other state. You can also approach probable clients and increase your database to offer extensive services.

Online Offers and Discounts are just a Click Away

It is tough to deal with clients, show them the property of their choice and also act as a mediator between the buyer and seller. With all this going on, it becomes difficult to keep track of special discounts or offers. With data mining services, you can get an insight into these amazing offers. Thus, you can plan a move or even offer your client an amazing deal.

What people are talking about – Easy Monitoring of your Online Reputation

The internet has become a melting pot where different people come together. In fact, it provides a huge platform where people discuss their likes and dislikes. When you dig into such online forums, you can get an idea of the reputation that you or your firm holds. You can learn what people think about you, where you need to buck up and where you need to slow down.

A Chance to Know your Competitors Better!

Last, but not least, you can keep an eye on the competition. Real estate is getting more competitive; therefore, it is important to have knowledge about your competitors to gain the upper hand. It will help you plan your moves and strategize with more ease. Moreover, you will also know that “something” that you have and your competitor does not, which can be subtly highlighted.

Property information extraction can prove to be the most fruitful method to get a cutting edge in the industry.

Source: http://www.hitechbposervices.com/blog/4-data-mining-tips-to-scrap-real-estate-data-innovative-way-to-give-realty-business-a-boost/

Wednesday, 19 November 2014

Web Scraping for Fun & Profit

There are a number of ways to retrieve data from a backend system within mobile projects. In an ideal world, everything would have a RESTful JSON API – but often, this isn't the case. Sometimes, SOAP is the language of the backend. Sometimes, it's some proprietary protocol which might not even be HTTP-based. Then, there's scraping.

Retrieving information from web sites as a human is easy. The page communicates information using stylistic elements like headings, tables and lists – this is the communication protocol of the web. Machines retrieve information with a focus on structure rather than style, typically using communication protocols like XML or JSON. Web scraping attempts to bridge this human protocol into a machine-readable format like JSON, and that is what we try to achieve here.

As a means of getting at data, it doesn't get much worse than web scraping. Scrapers were often built with regular expressions to retrieve the data from the page. Difficult to craft and impossible to maintain, this means of retrieval was far from ideal. The risks are many – even the slightest layout change on a web page can upset scraper code, and break the entire integration. It's a fragile means of building integrations, but sometimes it's the only way.

Having built a scraper service recently, the most interesting observation for me is how far we've come from these “dark days”. Node.js, and the massive ecosystem of community-built modules, has done much to change how these scraper services are built.

Effectively Scraping Information

Websites are built on the Document Object Model, or DOM. This is a tree structure which represents the information on a page. By interpreting the source of a website as a DOM, we can retrieve information much more reliably than by using methods like regular expression matching. The most popular method of querying the DOM is jQuery, which enables us to build powerful and maintainable queries for information. The JSDom Node module allows us to use a DOM-like structure in server-side code.

For the purpose of illustration, we're going to scrape the blog page of FeedHenry's website. I've built a small code snippet that retrieves the contents of the blog, and translates it into a JSON API. To find the queries I need to run, first I need to look at the HTML of the page. To do this, in Chrome, I right-click the element I'm looking to inspect on the page, and click “Inspect Element”.


Articles on the FeedHenry blog are a series of ‘div’ elements with the ‘.itemContainer’ class

Searching for a pattern in the HTML to query all blog post elements, we construct the `div.itemContainer` query. In jQuery, we can iterate over these using the .each method:

var posts = [];

$('div.itemContainer').each(function(index, item){
  // Make JSON objects of every post in here, pushing to the posts[] array
});

From there, we pick off the heading, author and post summary using a child selector on the original post, querying the relevant semantic elements:

    Post Title, using jQuery:

    $(item).find('h3').text().trim() // trim, because titles have white space either side

    Post Author, using jQuery:

    $(item).find('.catItemAuthor a').text()

    Post Body, using jQuery:

    $(item).find('p').text()

Adding some JSDom magic to our snippet, and pulling together the above two concepts (iterating through posts, and picking off info from each post), we get this snippet:

var request = require('request'),
    jsdom = require('jsdom');

jsdom.env(
  "http://www.feedhenry.com/category/blog",
  ["http://code.jquery.com/jquery.js"],
  function (errors, window) {
    var $ = window.$, // Alias jQuery
    posts = [];
    $('div.itemContainer').each(function(index, item){
      item = $(item); // make queryable in JQ
      posts.push({
        heading : item.find('h3').text().trim(),
        author : item.find('.catItemAuthor a').text(),
        teaser : item.find('p').text()
      });
    });
    console.log(posts);
  }
);

A note on building CSS Queries

As with styling web sites with CSS, building effective CSS queries is equally important when building a scraper. It’s important to build queries that are not too specific and likely to break when the structure of the page changes, but also not so general that they select extra data from the page you don’t want to retrieve.

A neat trick for generating the relevant selector is Chrome’s “Copy CSS Path” feature in the inspector. After finding the element in the inspector panel, right-click it and select “Copy CSS Path”. This works well for individual items, but not for picking out repeating patterns (like blog posts): the path it gives is usually much too specific, making for a fragile binding, and any change to the page’s structure will break the query.
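
To make the difference concrete, compare a copied path with a hand-written pattern. The copied path below is invented for illustration (only div.itemContainer comes from the page we inspected); the hand-written queries are the kind used in the snippets above:

// Hypothetical "Copy CSS Path" output for a single post – it matches exactly one
// element and breaks as soon as the surrounding markup shifts:
//   body > div#wrapper > div.content > div:nth-child(3) > div.itemContainer
// Hand-written queries that describe the repeating pattern instead:
var allPosts = $('div.itemContainer');        // every post on the page
var allTitles = $('div.itemContainer h3');    // every post title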

Making a Re-usable Scraping Service

Now that we’ve retrieved information from a web page, and made some JSON, let’s build a reusable API from this. We’re going to make a FeedHenry Blog Scraper service in FeedHenry3. For those of you not familiar with service creation, see this video walkthrough.

We’re going to start by creating a “new mBaaS Service”, rather than selecting one of the off-the-shelf services. To do this, we modify the application.js file of our service to include one route, /blog, which includes our code snippet from earlier:

// just boilerplate scraper setup
var mbaasApi = require('fh-mbaas-api'),
    express = require('express'),
    mbaasExpress = mbaasApi.mbaasExpress(),
    cors = require('cors'),
    request = require('request'),
    jsdom = require('jsdom');

var app = express();

app.use(cors());
app.use('/sys', mbaasExpress.sys([]));
app.use('/mbaas', mbaasExpress.mbaas);
app.use(mbaasExpress.fhmiddleware());

// Our /blog scraper route
app.get('/blog', function(req, res, next){
  jsdom.env(
    "http://www.feedhenry.com/category/blog",
    ["http://code.jquery.com/jquery.js"],
    function (errors, window) {
      var $ = window.$, // Alias jQuery
      posts = [];
      $('div.itemContainer').each(function(index, item){
        item = $(item); // make queryable in JQ
        posts.push({
          heading : item.find('h3').text().trim(),
          author : item.find('.catItemAuthor a').text(),
          teaser : item.find('p').text()
        });
      });
      return res.json(posts);
    }
  );
});

app.use(mbaasExpress.errorHandler());

var port = process.env.FH_PORT || process.env.VCAP_APP_PORT || 8001;
var server = app.listen(port, function() {});

We’re also going to write some documentation for our service, so we (and other developers) can interact with it using the FeedHenry discovery console. We’re going to modify the README.md file to document what we’ve just done using API Blueprint documentation format:

# FeedHenry Blog Web Scraper

This is a feedhenry blog scraper service. It uses the `JSDom` and `request` modules to retrieve the contents of the FeedHenry developer blog, and parse the content using jQuery.

# Group Scraper API Group

# blog [/blog]

Blog Endpoint

## blog [GET]

Get blog posts endpoint, returns JSON data.

+ Response 200 (application/json)

    + Body

            [{ blog post}, { blog post}, { blog post}]

We can now try out the scraper service in the studio and see the response.
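
Once deployed, the /blog route behaves like any other JSON endpoint. As a quick sanity check from another Node process – the host below is a placeholder for wherever your service ends up running – here is a minimal sketch:

var request = require('request');

// Replace with the URL your deployed service is reachable at.
var serviceUrl = 'https://your-service-host.example.com/blog';

request({ url: serviceUrl, json: true }, function (err, response, posts) {
  if (err) { return console.error('Request failed:', err); }
  if (!Array.isArray(posts)) { return console.error('Unexpected response:', posts); }
  // posts is the array of { heading, author, teaser } objects built by the scraper
  posts.forEach(function (post) {
    console.log(post.heading, '-', post.author);
  });
});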

Scraping – The Ultimate in API Creation?

Now that I’ve described some modern techniques for effectively scraping data from web sites, it’s time for some major caveats. First, WordPress blogs like ours already have feeds and APIs available to developers – there’s no need to ever scrape any of this content. Web scraping is not a replacement for an API. It should be used only as a last resort, after every endeavour to discover an API has been made. Using a web scraper in a commercial setting requires a good deal of time set aside to maintain the queries, and ideally an agreement with the party whose data is being scraped so that developers are alerted when the page structure changes.

With all this in mind, it can be a useful tool to iterate quickly on an integration when waiting for an API, or as a fun hack project.
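
To illustrate the first caveat: WordPress sites conventionally expose an RSS feed, typically at /feed, so the structured content is often already available without any scraping. A minimal sketch – the feed path follows the usual WordPress convention and is not taken from the post above:

var request = require('request');

// Conventional WordPress feed location; check the site's <link rel="alternate"> tag to confirm.
request('http://www.feedhenry.com/feed', function (err, response, body) {
  if (err) { return console.error('Request failed:', err); }
  // body is RSS/XML – structured, documented, and far more stable than scraped HTML.
  console.log(body.substring(0, 200));
});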

Source: http://www.feedhenry.com/web-scraping-fun-profit/

Tuesday, 18 November 2014

Get started with screenscraping using Google Chrome’s Scraper extension

How do you get information from a website into an Excel spreadsheet? The answer is screenscraping. There are a number of tools and platforms (such as OutWit Hub, Google Docs and ScraperWiki) that help you do this, but none of them are – in my opinion – as easy to use as the Google Chrome extension Scraper, which has become one of my absolute favourite data tools.

What is a screenscraper?

I like to think of a screenscraper as a small robot that reads websites and extracts pieces of information. When you are able to unleash a scraper on hundreds, thousands or even more pages, it can be an incredibly powerful tool.

In its most simple form, the one that we will look at in this blog post, it gathers information from one webpage only.

Google Chrome’s Scraper

Scraper is a Google Chrome extension that can be installed for free from the Chrome Web Store.

Now, if you have installed the extension correctly, you should see the option “Scrape similar” when you right-click any element on a webpage.

The Task: Scraping the contact details of all Swedish MPs

This is the site we’ll be working with: a list of all Swedish MPs, including their contact details. Start by right-clicking the name of any person and choose Scrape similar. This should open the following window.

Understanding XPaths

Before we move on to the actual scrape, let me briefly introduce XPaths. XPath is a language for finding information in an XML structure, for example an HTML file. It is a way to select tags (or rather “nodes”) of interest. In this case we use XPaths to define which parts of the webpage we want to collect. At w3schools you’ll find a broader introduction to XPaths.

A typical XPath might look something like this:

    //div[@id="content"]/table[1]/tr

Which in plain English translates to:

    // - Search the whole document...

    div[@id="content"] - ...for the div tag with the id "content".

    table[1] -  Select the first table.

    tr - And in that table, grab all rows.
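
If you want to check what an XPath like this matches before handing it to the extension, Chrome’s DevTools console can evaluate it directly with the standard document.evaluate API. A minimal sketch, using the example path above (which belongs to no particular page):

    // Paste into Chrome's DevTools console on the page you are inspecting.
    var result = document.evaluate(
      '//div[@id="content"]/table[1]/tr',  // the example XPath from above
      document, null,
      XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null
    );
    // Log the text of every matching row.
    for (var i = 0; i < result.snapshotLength; i++) {
      console.log(result.snapshotItem(i).textContent.trim());
    }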

Over to Scraper then. I’m given the following suggested XPath:

    //section[1]/div/div/div/dl/dt/a

The results look pretty good, but it seems we only get names starting with an A. We would also like to collect the phone numbers and party names. So let’s go back to the webpage and look at the HTML structure.

Right-click one of the MPs and choose Inspect element. We can see that each alphabetical list is contained in a section tag with the class “grid_6 alpha omega searchresult container clist”.

And if we open the section tag we find the list of MPs in div tags.

We will do this scrape in two steps. Step one is to select the tags containing all information about the MPs with one XPath. Step two is to pick the specific pieces of data that we are interested in (name, e-mail, phone number, party) and place them in columns.

Writing our XPaths

In step one we want to get as deep into the HTML structure as possible without losing any of the elements we are interested in. Hover over the tags in the Elements window to see which tags correspond to which elements on the page.

In our case this is the last tag that contains all the data we are looking for:

    //section[@class="grid_6 alpha omega searchresult container clist"]/div/div/div/dl

Click Scrape to test run the XPath. It should give you a list that looks something like this.

Scroll down the list to make sure it has 349 rows. That is the number of MPs in the Swedish parliament. The second step is to split this data into columns. Go back to the webpage and inspect the HTML code.

I have highlighted the parts that we want to extract. Grab them with the following XPaths:

    name: dt/a
    party: dd[1]
    region: dd[2]/span[1]
    seat: dd[2]/span[2]
    phone: dd[3]
    e-mail: dd[4]/span/a

Insert these paths in the Columns field and click Scrape to run the scraper.

Click Export to Google Docs to get the data into a spreadsheet.
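
For the curious, the same two-step logic (an absolute path to each MP’s dl block, then relative paths for the columns) can be reproduced in Chrome’s console. This is only a way to double-check the paths; the extension does all of this for you. A sketch using the paths above:

    // Evaluate an XPath and return the matching nodes as an array.
    function xp(path, context) {
      var r = document.evaluate(path, context, null,
              XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
      var nodes = [];
      for (var i = 0; i < r.snapshotLength; i++) { nodes.push(r.snapshotItem(i)); }
      return nodes;
    }

    // Step one: one dl element per MP.
    var rows = xp('//section[@class="grid_6 alpha omega searchresult container clist"]/div/div/div/dl', document);

    // Step two: relative paths for each column.
    var mps = rows.map(function (dl) {
      return {
        name:  xp('dt/a', dl)[0].textContent.trim(),
        party: xp('dd[1]', dl)[0].textContent.trim(),
        phone: xp('dd[3]', dl)[0].textContent.trim()
      };
    });

    console.log(mps.length);  // should be 349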

Source: http://dataist.wordpress.com/2012/10/12/get-started-with-screenscraping-using-google-chromes-scraper-extension/

Sunday, 16 November 2014

Screen-scraping with WWW::Mechanize

Screen-scraping is the process of emulating an interaction with a Web site - not just downloading pages, but filling out forms, navigating around the site, and dealing with the HTML received as a result. As well as for traditional lookups of information - like the example we'll be exploring in this article - we can use screen-scraping to enhance a Web service into doing something the designers hadn't given us the power to do in the first place. Here's an example:

I do my banking online, but get quickly bored with having to go to my bank's site, log in, navigate around to my accounts and check the balance on each of them. One quick Perl module (Finance::Bank::HSBC) later, and now I can loop through each of my accounts and print their balances, all from a shell prompt. Some more code, and I can do something the bank's site doesn't ordinarily let me - I can treat my accounts as a whole instead of individual accounts, and find out how much money I have, could possibly spend, and owe, all in total.

Another step forward would be to schedule a crontab every day to use the HSBC option to download a copy of my transactions in Quicken's QIF format, and use Simon Cozens' Finance::QIF module to interpret the file and run those transactions against a budget, letting me know whether I'm spending too much lately. This takes a simple Web-based system from being merely useful to being automated and bespoke; if you can think of how to write the code, then you can do it. (It's probably wise for me to add the caveat, though, that you should be extremely careful working with banking information programmatically, and even more careful if you're storing your login details in a Perl script somewhere.)

Back to screen-scrapers, and introducing WWW::Mechanize, written by Andy Lester and based on Skud's WWW::Automate. Mechanize allows you to go to a URL and explore the site, following links by name, taking cookies, filling in forms and clicking "submit" buttons. We're also going to use HTML::TokeParser to process the HTML we're given back, which is a process I've written about previously.

The site I've chosen to demonstrate on is the BBC's Radio Times site, which allows users to create a "Diary" for their favorite TV programs, and will tell you whenever any of the programs is showing on any channel. Being a London Perl M[ou]nger, I have an obsession with Buffy the Vampire Slayer. If I tell this to the BBC's site, then it'll tell me when the next episode is, and what the episode name is - so I can check whether it's one I've seen before. I'd have to remember to log into their site every few days to check whether there was a new episode coming along, though. Perl to the rescue! Our script will check to see when the next episode is and let us know, along with the name of the episode being shown.

Here's the code:

  #!/usr/bin/perl -w
  use strict;
  use WWW::Mechanize;
  use HTML::TokeParser;

If you're going to run the script yourself, then you should register with the Radio Times site and create a diary, before giving the e-mail address you used to do so below.

  my $email = "";
  die "Must provide an e-mail address" unless $email ne "";

We create a WWW::Mechanize object, and tell it the address of the site we'll be working from. The Radio Times' front page has an image link with an ALT text of "My Diary", so we can use that to get to the right section of the site:

  my $agent = WWW::Mechanize->new();
  $agent->get("http://www.radiotimes.beeb.com/");
  $agent->follow("My Diary");

The returned page contains two forms - one to allow you to choose from a list box of program types, and then a login form for the diary function. We tell WWW::Mechanize to use the second form for input. (Something to remember here is that WWW::Mechanize's list of forms, unlike an array in Perl, is indexed starting at 1 rather than 0. Our index is, therefore, '2'.)

  $agent->form(2);

Now we can fill in our e-mail address for the '<INPUT name="email" type="text">' field, and click the submit button. Nothing too complicated.

  $agent->field("email", $email);
  $agent->click();

WWW::Mechanize moves us to our diary page. This is the page we need to process to find the date details from. Upon looking at the HTML source for this page, we can see that the HTML we need to work through is something like:

  <input>
  <tr><td></td></tr>
  <tr><td></td><td></td><td class="bluetext">Date of episode</td></tr>
  <td></td><td></td>
  <td class="bluetext"><b>Time of episode</b></td></tr>
  <a href="page_with_episode_info"></a>

This can be modeled with HTML::TokeParser as below. The important methods to note are get_tag - which will move the stream on to the next opening for the tag given - and get_trimmed_text, which returns the text between the current and given tags. For example, for the HTML code "<b>Bold text here</b>", my $text = $stream->get_trimmed_text("/b") would set $text to "Bold text here".

Also note that we're initializing HTML::TokeParser on '\$agent->{content}' - this is an internal variable for WWW::Mechanize, exposing the HTML content of the current page.

  my $stream = HTML::TokeParser->new(\$agent->{content});
  my $date;
    # <input>
  $stream->get_tag("input");
  # <tr><td></td></tr><tr>
  $stream->get_tag("tr"); $stream->get_tag("tr");
  # <td></td><td></td>
  $stream->get_tag("td"); $stream->get_tag("td");
  # <td class="bluetext">Date of episode</td></tr>
  my $tag = $stream->get_tag("td");
  if ($tag->[1]{class} and $tag->[1]{class} eq "bluetext") {
      $date = $stream->get_trimmed_text("/td");
      # The date contains '&nbsp;', which we'll translate to a space.
      $date =~ s/\xa0/ /g;
  }
   # <td></td><td></td>
  $stream->get_tag("td");
  # <td class="bluetext"><b>Time of episode</b> 
  $tag = $stream->get_tag("td");
  if ($tag->[1]{class} eq "bluetext") {
      $stream->get_tag("b");
      # This concatenates the time of the showing to the date.
      $date .= ", from " . $stream->get_trimmed_text("/b");
  }
  # </td></tr><a href="page_with_episode_info"></a>
  $tag = $stream->get_tag("a");
  # Match the URL to find the page giving episode information.
  $tag->[1]{href} =~ m!src=(http://.*?)'!;

We have a scalar, $date, containing a string that looks something like "Thursday 23 January, from 6:45pm to 7:30pm.", and we have a URL, in $1, that will tell us more about that episode. We tell WWW::Mechanize to go to the URL:

  $agent->get($1);

The navigation we want to perform on this page is far less complex than on the last page, so we can avoid using a TokeParser for it - a regular expression should suffice. The HTML we want to parse looks something like this:

  <br><b>Episode</b><br>  The Episode Title<br>

We use a regex delimited with '!' in order to avoid having to escape the slashes present in the HTML, and store any number of alphanumeric characters after some whitespace, all between <br> tags after the Episode header:

  $agent->{content} =~ m!<br><b>Episode</b><br>\s+?(\w+?)<br>!;

$1 now contains our episode, and all that's left to do is print out what we've found:

  my $episode = $1;
  print "The next Buffy episode ($episode) is on $date.\n";

And we're all set. We can run our script from the shell:

  $ perl radiotimes.pl

  The next Buffy episode (Gone) is Thursday Jan. 23, from 6:45 to 7:30 p.m.

I hope this gives a light-hearted introduction to the usefulness of the modules involved. As a note for your own experiments, WWW::Mechanize supports cookies - in that the requestor is a normal LWP::UserAgent object - but they aren't enabled by default. If you need to support cookies, then your script should call "use HTTP::Cookies; $agent->cookie_jar(HTTP::Cookies->new);" on your agent object in order to enable session-volatile cookies for your own code.

Happy screen-scraping, and may you never miss a Buffy episode again.

Source: http://www.perl.com/pub/2003/01/22/mechanize.html

Friday, 14 November 2014

Big Data Democratization via Web Scraping

If we had to put democratization of data in line with the classroom definition of democracy, it would read: data by the people, for the people, of the people. Makes a lot of sense, doesn’t it? It resonates with the general feeling we have these days with respect to easy access to data for our daily tasks, thanks to the internet revolution and now social media.

Big Data web crawling

By the people – most of the public data on the web consists of user groups’ sentiments, analyses and other information.

Of the people – although the “of” here does not literally mean that the data is owned by the users, all such data on the internet either relates to the user group itself or to its views on things.

For the people – most of this data is presented via channels (social media, news, etc.) for public benefit, be it travel tips, daily news feeds, product price comparisons, etc.

Essentially, data democratization has come to mean that, by leveraging cloud computing, data that’s mostly user-generated on the internet has become accessible to all industries, big or small, for their own internal use (commercial or not). This democratization has been put to use for unearthing hidden patterns from big blobs of datasets. Use cases have evolved with the consumer internet landscape, and Big Data is now being used for various purposes that were quite unanticipated.

With respect to democratization, we’ve also heard enough about how data analytics is moving beyond the data analysts within companies and becoming available even to the non-tech-savvy. But did anyone mention the DaaS providers who aid in the very first phase, data acquisition? Data scraping or web crawling (whatever your lingo is) has become an indivisible part of data democratization, especially at large scale. The first step in bringing public data to use is acquiring it, which is where setting up web crawlers internally or partnering with DaaS providers comes into play. This blog guides you towards making that choice. It’s not always all the data on the web that companies should crunch; there are obviously certain channels that are of more interest to the community than the rest, and there lies the barrier: to identify sources of higher ROI and acquire their data in a machine-readable format.

DaaS providers usually help with the entire data acquisition pipeline – starting from picking the right sources, through crawling, extraction and dedup, to data normalization based on specific requirements. Once the data has been acquired, it is most likely published on another channel. Such a network effect bolsters the democracy.

Steps in Data Acquisition Pipeline

(Image: crawl → extract → normalize)

Note- PromptCloud only delivers structured data as per the schema provided.

So while democratization may refer to easy access to computing resources in order to draw patterns from Big Data, it could also be taken to mean ensuring the right data, in the right format, at the right intervals. In fact, DaaS providers have themselves used this democracy to empower it further.

Source:https://www.promptcloud.com/blog/big-data-democratization-using-web-scraping-2/

Wednesday, 12 November 2014

Why Businesses Need Data Scraping Service?

With the ever-increasing reach of internet technology there is an abundance of information that can be as valuable as gold if captured in a structured format. We all know the importance of information. It has indeed become a valuable commodity and the most sought-after product for businesses. With widespread competition, businesses constantly need to strive for better performance.

Taking this into consideration, web data scraping services have become an inevitable component of business, as they are highly useful in getting relevant and accurate information. In the early days the data scraping process involved copying and pasting information, which was not practical because it required intensive labour and was very costly. But now, with the help of new data scraping tools like Mozenda, it is possible to extract data from websites easily. You can also take the help of data scrapers and data mining experts who scrape the data and automatically keep records of it.

How Do Professional Data Scraping Companies and Data Mining Experts Devise a Solution?

Data Scraping Plan and Solutions

Image credit: http://www.loginworks.com/images/newscapingpage/data-as-service-plan.png

Why Data Scraping is Highly Essential for Businesses?

Data scraping is highly valuable for every industry, especially hospitality, eCommerce, research and development, healthcare, finance, marketing, real estate (scraping properties, agents, sites etc.) and travel and tourism. The reason is that these are industries with cut-throat competition, and with the help of data scraping tools it is possible to extract useful information about customer preferences, their preferred locations, the strategies of your competitors, and so on.

It is very important in today’s dynamic business world to understand the requirements of your customers and their preferences, because customers are the kings of the market: they determine demand. The web data scraping process will help you get this vital information, and will help you make crucial decisions that are critical to the success of the business. With the help of data scraping tools you can automate the data scraping process, which results in increased productivity and accuracy.

Reasons Why Businesses Opt for Website Data Scraping Solutions:

Demand For New Data:

There is an overflowing demand for new data from businesses across the globe, due to the increase in competition. The more information you have about your products, competitors, market etc., the better your chances of expanding and persisting in a competitive business environment. The manner in which the data extraction process is carried out is also very important, as mere data collection is useless. Today there is a need for a process through which you can utilize the information for the betterment of the business. This is where the data scraping process and data scraping tools come into the picture.

Image credit: 3idatascraping.com
Capitalize On Hot Updates:

Today, simple data collection is not enough to sustain yourself in the business world. There is a need for up-to-date information. There are times when you will have information about market trends relevant to your business, but it will not be current, and at such times you will lose out on critical information. Hence, in business today it is a must to have recent information at your disposal.

The more recent the updates you have about your business's market, the better it is for your growth and sustenance. We are already seeing a lot of innovation happening in business, so it is very important to stay on your toes and collect relevant information with the help of data scrapers. With the help of data scraping tools you can stay abreast of the latest developments in your field, albeit by spending extra money, but it is a necessary trade-off if you want to grow your business rather than be left behind like a laggard.

Analyzing Future Demands:

Foreknowledge about the major and minor issues in your industry will help you assess the future demand for your product or service. With the help of the data scraping process, data scrapers can gather information about possibilities in the business or venture you are involved in. You can also remain alert for changes and adjustments, and analyse all aspects of your products and services.

Appraising Business:

It is very important to regularly analyse and evaluate your business. For that, you need to evaluate whether your business goals have been met or not, and it is important to know how your own business is performing. For example, if the wider market decides to lower prices in order to grow its customer base, you need to know whether you can remain in the industry despite lowering your prices. This can be determined with the help of the data scraping process and data scraping tools.

Source:http://www.habiledata.com/blog/why-businesses-need-data-scraping-service