com.ning.metrics.serialization.hadoop.pig
Class ThriftStorage

java.lang.Object
  extended by org.apache.pig.LoadFunc
      extended by com.ning.metrics.serialization.hadoop.pig.ThriftStorage
All Implemented Interfaces:
org.apache.pig.LoadMetadata

public class ThriftStorage
extends org.apache.pig.LoadFunc
implements org.apache.pig.LoadMetadata


Constructor Summary
ThriftStorage(String schemaName)
          Creates a loader for the Thrift events described by the given schema name.
ThriftStorage(String schemaName, String goodwillHost, int goodwillPort)
          Creates a loader that resolves the named schema from the Goodwill registry at the given host and port.
 
Method Summary
 org.apache.hadoop.mapreduce.InputFormat getInputFormat()
          This will be called during planning on the front end.
 org.apache.pig.data.Tuple getNext()
          Retrieves the next tuple to be processed.
 String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job)
          Find what columns are partition keys for this input.
 org.apache.pig.ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job)
          Get a schema for the data to be loaded.
 org.apache.pig.ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job)
          Get statistics about the data to be loaded.
 void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit split)
          Initializes LoadFunc for reading data.
 void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the loader the location of the object(s) being loaded.
 void setPartitionFilter(org.apache.pig.Expression partitionFilter)
          Set the filter for partitioning.
 
Methods inherited from class org.apache.pig.LoadFunc
getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath, setUDFContextSignature
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

ThriftStorage

public ThriftStorage(String schemaName)
              throws IOException
Throws:
IOException

ThriftStorage

public ThriftStorage(String schemaName,
                     String goodwillHost,
                     int goodwillPort)
              throws IOException
Throws:
IOException
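
In a Pig script, the loader is referenced by its fully qualified class name in a LOAD statement, with constructor arguments passed as quoted strings. A minimal sketch (the input path and schema name below are hypothetical):

```pig
-- Load Thrift-serialized events; 'MyEvent' is a hypothetical schema name
events = LOAD '/events/MyEvent'
         USING com.ning.metrics.serialization.hadoop.pig.ThriftStorage('MyEvent');
```

Note that Pig passes constructor arguments as Strings, so the single-argument constructor is the one invoked from a script like the above.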
Method Detail

setLocation

public void setLocation(String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object.

This method will be called in the backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

Specified by:
setLocation in class org.apache.pig.LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - the Job object; use it to store information in, or retrieve earlier stored information from, the org.apache.pig.impl.util.UDFContext
Throws:
IOException - if the location is not valid.

getInputFormat

public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
                                                       throws IOException
This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.

Specified by:
getInputFormat in class org.apache.pig.LoadFunc
Returns:
the InputFormat associated with this loader.
Throws:
IOException - if there is an exception during InputFormat construction

prepareToRead

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                          org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit split)
                   throws IOException
Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.

Specified by:
prepareToRead in class org.apache.pig.LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process
Throws:
IOException - if there is an exception during initialization

getNext

public org.apache.pig.data.Tuple getNext()
                                  throws IOException
Retrieves the next tuple to be processed.

Specified by:
getNext in class org.apache.pig.LoadFunc
Returns:
the next tuple to be processed, or null if all input has been consumed
Throws:
IOException - if there is an exception while retrieving the next tuple

getSchema

public org.apache.pig.ResourceSchema getSchema(String location,
                                               org.apache.hadoop.mapreduce.Job job)
                                        throws IOException
Get a schema for the data to be loaded.

Specified by:
getSchema in interface org.apache.pig.LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown, or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection; i.e., getSchema should always return the original schema, even after a projection has been pushed.
Throws:
IOException - if an exception occurs while determining the schema
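
The schema returned here is what a Pig script sees when it inspects the loaded relation. For example (path, schema name, and field names all hypothetical):

```pig
events = LOAD '/events/MyEvent'
         USING com.ning.metrics.serialization.hadoop.pig.ThriftStorage('MyEvent');
-- Prints the schema derived from getSchema, e.g.
-- events: {dateCreated: chararray, userId: long}
DESCRIBE events;
```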

getStatistics

public org.apache.pig.ResourceStatistics getStatistics(String location,
                                                       org.apache.hadoop.mapreduce.Job job)
                                                throws IOException
Get statistics about the data to be loaded. If no statistics are available, then null should be returned.

Specified by:
getStatistics in interface org.apache.pig.LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
statistics about the data to be loaded. If no statistics are available, then null should be returned.
Throws:
IOException - if an exception occurs while retrieving statistics

getPartitionKeys

public String[] getPartitionKeys(String location,
                                 org.apache.hadoop.mapreduce.Job job)
                          throws IOException
Find what columns are partition keys for this input.

Specified by:
getPartitionKeys in interface org.apache.pig.LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys
Throws:
IOException - if an exception occurs while retrieving partition keys

setPartitionFilter

public void setPartitionFilter(org.apache.pig.Expression partitionFilter)
                        throws IOException
Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. If the implementation returns null from getPartitionKeys(String, org.apache.hadoop.mapreduce.Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

Specified by:
setPartitionFilter in interface org.apache.pig.LoadMetadata
Parameters:
partitionFilter - that describes filter for partitioning
Throws:
IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.
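
Partition filtering is driven by ordinary FILTER statements in the script: when a FILTER references only fields reported by getPartitionKeys, the Pig runtime converts the condition into an Expression and passes it to this method, so non-matching partitions are never read. A sketch (the partition key name 'dt' is hypothetical):

```pig
events = LOAD '/events/MyEvent'
         USING com.ning.metrics.serialization.hadoop.pig.ThriftStorage('MyEvent');
-- If 'dt' is among the keys returned by getPartitionKeys, this condition
-- is pushed down via setPartitionFilter instead of being evaluated per record.
recent = FILTER events BY dt >= '2011-01-01';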


Copyright © 2011. All Rights Reserved.