Pig Tutorial

The Pig tutorial shows you how to run two Pig scripts, in both local mode and Hadoop mode.

Local Mode: To run the scripts in local mode, no Hadoop or HDFS installation is required. All files are installed and run from your local host and file system.


Hadoop Mode: To run the scripts in Hadoop (MapReduce) mode, you need access to a Hadoop cluster and an HDFS installation, both available through the Hadoop Virtual Machine (VM) provided with this tutorial.


Java Installation (Note: already set up on the Hadoop VM.)

1.Java 1.6.x (from Sun) is installed in /usr/jre16.
2.The JAVA_HOME environment variable is set to the root of your Java installation in the "/home/hadoop-user/.profile" file.
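
For reference, the .profile line that accomplishes this would look something like the following (a sketch; the VM's actual file may differ):

export JAVA_HOME=/usr/jre16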

Pig Installation (Note: already set up on the Hadoop VM.)

1.The pig.jar and tutorial files are stored in the "/home/hadoop-user/pig" directory.
2.The PIGDIR environment variable is set to "/home/hadoop-user/pig/" in the .profile file of hadoop-user.
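
Similarly, the PIGDIR setting amounts to a single export in .profile (again, a sketch of what the VM provides):

export PIGDIR=/home/hadoop-user/pig/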

Pig Scripts: Local Mode

To run the Pig scripts in local mode, do the following:

1.Go to the /home/hadoop-user/pig directory on the Hadoop VM.
2.Review Pig Script 1 and Pig Script 2.
3.Execute the following command (using either script1-local.pig or script2-local.pig).

$ java -cp $PIGDIR/pig.jar org.apache.pig.Main -x local script1-local.pig

4.Review the result file (either script1-local-results.txt or script2-local-results.txt):

$ ls -l script1-local-results.txt
$ cat script1-local-results.txt
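
Note that invoking org.apache.pig.Main directly is equivalent to using the pig launcher script that ships with the distribution; if that script is on your PATH, the following also runs the script in local mode:

$ pig -x local script1-local.pig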

Pig Scripts: Hadoop Mode

To run the Pig scripts in Hadoop (MapReduce) mode, do the following:

1.Go to the /home/hadoop-user/pig directory on the Hadoop VM.
2.Review Pig Script 1 and Pig Script 2.
3.Copy the excite.log.bz2 file from the /home/hadoop-user/pig directory to your HDFS home directory.

$ hadoop fs -copyFromLocal excite.log.bz2 .
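
To confirm that the copy succeeded, list your HDFS home directory:

$ hadoop fs -ls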

4.Ensure that the HADOOPSITEPATH environment variable is set to the location of your hadoop-site.xml file, i.e. the "/home/hadoop-user/hadoop-tutorial-conf/" directory.
5.Execute the following command (using either script1-hadoop.pig or script2-hadoop.pig):

$ java -cp $PIGDIR/pig.jar:$HADOOPSITEPATH org.apache.pig.Main script1-hadoop.pig
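
MapReduce is Pig's default execution mode, so no -x flag is needed here; the command above is equivalent to passing -x mapreduce explicitly:

$ java -cp $PIGDIR/pig.jar:$HADOOPSITEPATH org.apache.pig.Main -x mapreduce script1-hadoop.pig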

6.Review the result files (located in either the script1-hadoop-results or script2-hadoop-results HDFS directory):

$ hadoop fs -ls script1-hadoop-results
$ hadoop fs -cat 'script1-hadoop-results/*' | less
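
You can also copy the result files out of HDFS for closer inspection on the local file system:

$ hadoop fs -copyToLocal script1-hadoop-results script1-hadoop-results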


Pig Script 1: Query Phrase Popularity

The Query Phrase Popularity script (script1-local.pig or script1-hadoop.pig) processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.

The script is shown here:

*Register the tutorial JAR file so that the included UDFs can be called in the script.

REGISTER ./tutorial.jar;

*Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the “raw” bag as an array of records with the fields user, time, and query.

raw = LOAD 'excite.log' USING PigStorage('\t') AS (user, time, query);

*Call the NonURLDetector UDF to remove records if the query field is empty or a URL.

clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);

*Call the ToLower UDF to change the query field to lowercase.

clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;

*Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour (HH) from the time field.

houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;

*Call the NGramGenerator UDF to compose the n-grams of the query.

ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;

*Use the DISTINCT command to get the unique n-grams for all records.

ngramed2 = DISTINCT ngramed1;

*Use the GROUP command to group records by n-gram and hour.

hour_frequency1 = GROUP ngramed2 BY (ngram, hour);

*Use the COUNT function to get the count (occurrences) of each n-gram.

hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
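
To see why flatten($0) yields the ngram and hour fields here, it helps to inspect the schema of the grouped relation with DESCRIBE (the exact output format varies by Pig version):

DESCRIBE hour_frequency1;
-- roughly: hour_frequency1: {group: (ngram, hour), ngramed2: {(user, hour, ngram)}}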

*Use the GROUP command to group records by n-gram only. Each group now corresponds to a distinct n-gram and has the count for each hour.

uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;

*For each group, identify the hour in which this n-gram is used with a particularly high frequency. Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.

uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));

*Use the FOREACH-GENERATE command to assign names to the fields.

uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;

*Use the FILTER command to remove all records with a score less than or equal to 2.0.

filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;

*Use the ORDER command to sort the remaining records by hour and score.

ordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;

*Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: hour, ngram, score, count, mean.

STORE ordered_uniq_frequency INTO '/tmp/tutorial-results' USING PigStorage();
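
While developing the script, you can preview any intermediate relation instead of storing it; DUMP writes a relation to the console:

DUMP filtered_uniq_frequency;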

Pig Script 2: Temporal Query Phrase Popularity

The Temporal Query Phrase Popularity script (script2-local.pig or script2-hadoop.pig) processes a search query log file from the Excite search engine and compares the frequency of occurrence of search phrases across two time periods separated by twelve hours.

The script is shown here:

*Register the tutorial JAR file so that the user-defined functions (UDFs) can be called in the script.

REGISTER ./tutorial.jar;

*Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the “raw” bag as an array of records with the fields user, time, and query.

raw = LOAD 'excite.log' USING PigStorage('\t') AS (user, time, query);

*Call the NonURLDetector UDF to remove records if the query field is empty or a URL.

clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);

*Call the ToLower UDF to change the query field to lowercase.

clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;

*Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour from the time field.

houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;

*Call the NGramGenerator UDF to compose the n-grams of the query.

ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;

*Use the DISTINCT command to get the unique n-grams for all records.

ngramed2 = DISTINCT ngramed1;

*Use the GROUP command to group the records by n-gram and hour.

hour_frequency1 = GROUP ngramed2 BY (ngram, hour);

*Use the COUNT function to get the count (occurrences) of each n-gram.

hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;

*Use the FOREACH-GENERATE command to assign names to the fields.

hour_frequency3 = FOREACH hour_frequency2 GENERATE $0 as ngram, $1 as hour, $2 as count;

*Use the FILTER command to get the n-grams for hour '00'.

hour00 = FILTER hour_frequency3 BY hour eq '00';

*Use the FILTER command to get the n-grams for hour '12'.

hour12 = FILTER hour_frequency3 BY hour eq '12';

*Use the JOIN command to get the n-grams that appear in both hours.

same = JOIN hour00 BY $0, hour12 BY $0;

*Use the FOREACH-GENERATE command to record their frequency.

same1 = FOREACH same GENERATE hour00::ngram as ngram, $2 as count00, $5 as count12;

*Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: ngram, count00, count12.

STORE same1 INTO '/tmp/tutorial-join-results' USING PigStorage();
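
The positional references $2 and $5 in the same1 step come from the schema of the join, which concatenates the fields of hour00 and hour12; DESCRIBE makes this concrete (output format varies by Pig version):

DESCRIBE same;
-- roughly: same: {hour00::ngram, hour00::hour, hour00::count, hour12::ngram, hour12::hour, hour12::count}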
