Pig Latin Operators

Pig Latin provides a number of operators which filter, join, or otherwise organize data.

FOREACH: The FOREACH command operates on each element of a data bag. This is useful, for instance, for processing each input record in a bag returned by a LOAD statement.

FOREACH bagname GENERATE expression, expression...

This statement iterates over the contents of a bag. It applies the expressions on the right of the GENERATE keyword to the data provided by the current record emitted from the bag. The expressions may be, for example, the names of fields. So to extract the names of all users who accessed the site (based on the query_log.txt example shown above), we could write a query like:

FOREACH queries GENERATE userId;

In the FOREACH statement, each element of the bag is considered independently; no expression may reference more than one element of the bag at a time. This independence is what allows the statement to be processed in parallel using Hadoop MapReduce.

Expressions in the GENERATE clause are not limited to the names of fields; they may be fields (referenced by name, like userId, or by position, like $0), constants, algebraic operations, map lookups, conditional expressions, or FLATTEN expressions, described below.

Finally, these expressions may also call user-provided functions that are written in Java. These user-provided functions have access to the entire current record through a Pig library; in this way, Pig can be used as the heavy-lifting component to automate record-by-record mapping using an application-specific Java function to perform tricky parsing or evaluation logic. Pig also provides several of the most commonly-needed functions, such as COUNT, AVG, MIN, MAX, and SUM.
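Because each record is handled independently, the semantics of FOREACH ... GENERATE can be sketched as a per-record map. The following Python simulation is illustrative only (the field names and sample data are assumptions, not taken from the query_log.txt example):

```python
# Simulate: FOREACH queries GENERATE userId;
# Each record (tuple) in the bag is processed independently,
# which is what makes the statement trivially parallelizable.
queries = [
    ("alice", "/index.html", 200),
    ("bob", "/about.html", 404),
]

def generate(record):
    # Apply the expression list to one record: project out userId.
    user_id, _url, _status = record
    return (user_id,)

result = [generate(r) for r in queries]
# result == [("alice",), ("bob",)]
```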

FLATTEN is an expression which eliminates a level of nesting. Given a tuple that contains a bag, FLATTEN emits one tuple for each record in the bag. For example, if we had a bag of records containing a person's name and a list of the types of pets they own:

(Alice, { turtle, goldfish, cat })
(Bob, { dog, cat })

A FLATTEN expression would eliminate the inner bags like so:

(Alice, turtle)
(Alice, goldfish)
(Alice, cat)
(Bob, dog)
(Bob, cat)
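The FLATTEN semantics shown above can be sketched as a one-liner in Python (a simulation of the behavior, not Pig syntax):

```python
# Simulate FLATTEN: emit one output tuple per element of the inner bag.
owners = [
    ("Alice", ["turtle", "goldfish", "cat"]),
    ("Bob", ["dog", "cat"]),
]

flattened = [(name, pet) for name, pets in owners for pet in pets]
# flattened == [("Alice", "turtle"), ("Alice", "goldfish"),
#               ("Alice", "cat"), ("Bob", "dog"), ("Bob", "cat")]
```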

FILTER statements iterate over a bag and return a new bag containing all elements which pass a conditional expression, e.g.:

adults = FILTER people BY age > 21;
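In Python terms, FILTER behaves like a list comprehension with a predicate (an illustrative simulation; the sample records are assumptions):

```python
# Simulate: adults = FILTER people BY age > 21;
people = [("Alice", 30), ("Bob", 19), ("Cindy", 45)]

# Keep only the records whose age field passes the condition.
adults = [p for p in people if p[1] > 21]
# adults == [("Alice", 30), ("Cindy", 45)]
```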

The COGROUP and JOIN operations perform similar functions: both unite related data elements from multiple data sets. The difference is that JOIN acts like the SQL JOIN statement, producing a flat set of output records containing the joined cross-product of the matching input records. COGROUP, on the other hand, groups the elements by their common field and returns one record per key containing two separate bags: the first bag holds the records of the first data set with that common field, and the second bag holds the records of the second data set with that common field.

To illustrate the difference, suppose we had the flattened data set mapping people to their pets, and another flattened data set mapping people to their friends. We could create a "pets of friends" data set out of these like the following. Here are the input data sets:

pets: (owner, pet)
----------------------
(Alice, turtle)
(Alice, goldfish)
(Alice, cat)
(Bob, dog)
(Bob, cat)

friends: (friend1, friend2)
----------------------
(Cindy, Alice)
(Mark, Alice)
(Paul, Bob)

Here is what is returned by COGROUP:

COGROUP pets BY owner, friends BY friend2; returns:

( Alice, {(Alice, turtle), (Alice, goldfish), (Alice, cat)},
{(Cindy, Alice), (Mark, Alice)} )
( Bob, {(Bob, dog), (Bob, cat)}, {(Paul, Bob)} )

Contrasted with the more familiar, non-hierarchical JOIN operator:

JOIN pets BY owner, friends BY friend2; returns:

(Alice, turtle, Cindy)
(Alice, turtle, Mark)
(Alice, goldfish, Cindy)
(Alice, goldfish, Mark)
(Alice, cat, Cindy)
(Alice, cat, Mark)
(Bob, dog, Paul)
(Bob, cat, Paul)
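Using the pets and friends data sets above, the two behaviors can be sketched side by side in Python (a simulation of the semantics, not Pig's actual execution plan):

```python
from collections import defaultdict

pets = [("Alice", "turtle"), ("Alice", "goldfish"), ("Alice", "cat"),
        ("Bob", "dog"), ("Bob", "cat")]
friends = [("Cindy", "Alice"), ("Mark", "Alice"), ("Paul", "Bob")]

# COGROUP pets BY owner, friends BY friend2:
# one output record per key, holding two separate bags.
groups = defaultdict(lambda: ([], []))
for t in pets:
    groups[t[0]][0].append(t)       # key on owner
for t in friends:
    groups[t[1]][1].append(t)       # key on friend2
cogrouped = [(k, bag1, bag2) for k, (bag1, bag2) in groups.items()]

# JOIN pets BY owner, friends BY friend2:
# a flat cross-product of the matching records for each key.
joined = [(owner, pet, friend)
          for owner, pet in pets
          for friend, friend2 in friends
          if owner == friend2]
# joined == [("Alice", "turtle", "Cindy"), ("Alice", "turtle", "Mark"), ...]
```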

In general, the COGROUP command supports grouping on as many data sets as desired; three or more data sets can be joined in this fashion. It is also possible to group the elements of a single data set; this is supported through an alternate keyword, GROUP.

A GROUP ... BY statement organizes a bag of records into bags of related items based on the field identified as their common key. For example, the pets bag from the previous example could be grouped with:

GROUP pets BY owner; returns:

( Alice, {(Alice, turtle), (Alice, goldfish), (Alice, cat)} )
( Bob, {(Bob, dog), (Bob, cat)} )

In this way, GROUP and FLATTEN are effectively inverses of one another.
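That inverse relationship can be demonstrated with a round trip: grouping the flat pets records and then flattening the bags recovers the original records. A Python simulation of the semantics (not Pig syntax):

```python
from itertools import groupby

pets = [("Alice", "turtle"), ("Alice", "goldfish"), ("Alice", "cat"),
        ("Bob", "dog"), ("Bob", "cat")]

# GROUP pets BY owner: one (key, bag) record per owner.
# itertools.groupby requires the input to be sorted by the key.
grouped = [(owner, list(bag))
           for owner, bag in groupby(sorted(pets), key=lambda t: t[0])]

# FLATTEN the bags: recovers the original flat records.
flattened = [t for _owner, bag in grouped for t in bag]
assert sorted(flattened) == sorted(pets)
```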

More complicated statements can be realized as well: operations which expect a data set as input do not need to use an explicitly-named data set; they can use one generated "inline" with another FILTER, GROUP or other statement.

When the final data set has been created by a Pig Latin script, the output can be saved to a file with the STORE command, which follows the form:

STORE data set INTO 'filename' USING function()

The provided function specifies how to serialize the data to the file; if it is omitted, then a default serializer will write plain-text tab-delimited files.
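The default behavior, plain-text tab-delimited output with one tuple per line, can be sketched as follows. This is an illustrative Python simulation, not Pig's actual PigStorage implementation:

```python
import io

def store_tab_delimited(records, out):
    # One line per tuple; fields separated by tabs (the default format).
    for record in records:
        out.write("\t".join(str(field) for field in record) + "\n")

buf = io.StringIO()
store_tab_delimited([("Alice", "turtle"), ("Bob", "dog")], buf)
# buf.getvalue() == "Alice\tturtle\nBob\tdog\n"
```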

A number of additional operators exist for purposes such as removing duplicate records and sorting. These additional operators and expression syntaxes are explained in greater detail in the original Pig Latin paper.
