Packages

package streaming


Type Members

  1. final class JavaKinesisWordCountASL extends AnyRef

    Consumes messages from an Amazon Kinesis stream and performs a word count.

    This example spins up one Kinesis receiver per shard of the given stream, then starts pulling from the stream's last checkpointed sequence number.

    Usage: JavaKinesisWordCountASL [app-name] [stream-name] [endpoint-url] [region-name]
      [app-name] is the name of the consumer app, used to track the read data in DynamoDB
      [stream-name] is the name of the Kinesis stream (e.g. mySparkStream)
      [endpoint-url] is the endpoint of the Kinesis service (e.g. https://kinesis.us-east-1.amazonaws.com)
      [region-name] is the region name of the Kinesis endpoint (e.g. us-east-1)

    Example:
      # export AWS keys if necessary
      $ export AWS_ACCESS_KEY_ID=[your-access-key]
      $ export AWS_SECRET_ACCESS_KEY=[your-secret-key]

      # run the example
      $ SPARK_HOME/bin/run-example streaming.JavaKinesisWordCountASL myAppName mySparkStream \
          https://kinesis.us-east-1.amazonaws.com

    There is a companion helper class called KinesisWordProducerASL which puts dummy data onto the Kinesis stream.

    This code uses the DefaultAWSCredentialsProviderChain to find credentials in the following order:
      1. Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
      2. Java system properties - aws.accessKeyId and aws.secretKey
      3. Credential profiles file - default location (~/.aws/credentials) shared by all AWS SDKs
      4. Instance profile credentials - delivered through the Amazon EC2 metadata service
    For more information, see https://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html

    See https://spark.apache.org/docs/latest/streaming-kinesis-integration.html for more details on the Kinesis Spark Streaming integration.
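The word count applied to each batch is the classic split/map/reduce pattern. As a minimal sketch, here is the same transformation on plain Scala collections rather than on the DStream of decoded Kinesis records the actual example operates on (the object name `WordCountSketch` is illustrative, not part of the example):

```scala
// Illustrative sketch of the word-count transformation used by the
// example, applied to plain collections instead of a Spark DStream.
object WordCountSketch {
  // In the real example each Kinesis record's byte payload is decoded
  // to a UTF-8 String; here we start from already-decoded lines.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))            // split each line into words
      .filter(_.nonEmpty)               // drop empty tokens
      .groupBy(identity)                // local stand-in for reduceByKey(_ + _)
      .map { case (word, ws) => (word, ws.size) }
}
```

On a DStream the same pipeline ends with `reduceByKey(_ + _)` so counting is distributed across partitions rather than done with a local `groupBy`.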

Value Members

  1. object KinesisWordCountASL extends Logging

    Consumes messages from an Amazon Kinesis stream and performs a word count.

    This example spins up one Kinesis receiver per shard of the given stream, then starts pulling from the stream's last checkpointed sequence number.

    Usage: KinesisWordCountASL <app-name> <stream-name> <endpoint-url> <region-name>
      <app-name> is the name of the consumer app, used to track the read data in DynamoDB
      <stream-name> is the name of the Kinesis stream (e.g. mySparkStream)
      <endpoint-url> is the endpoint of the Kinesis service (e.g. https://kinesis.us-east-1.amazonaws.com)
      <region-name> is the region name of the Kinesis endpoint (e.g. us-east-1)

    Example:
      # export AWS keys if necessary
      $ export AWS_ACCESS_KEY_ID=<your-access-key>
      $ export AWS_SECRET_ACCESS_KEY=<your-secret-key>

      # run the example
      $ SPARK_HOME/bin/run-example streaming.KinesisWordCountASL myAppName mySparkStream \
          https://kinesis.us-east-1.amazonaws.com

    There is a companion helper class called KinesisWordProducerASL which puts dummy data onto the Kinesis stream.

    This code uses the DefaultAWSCredentialsProviderChain to find credentials in the following order:
      1. Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
      2. Java system properties - aws.accessKeyId and aws.secretKey
      3. Credential profiles file - default location (~/.aws/credentials) shared by all AWS SDKs
      4. Instance profile credentials - delivered through the Amazon EC2 metadata service
    For more information, see https://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html
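The chain's first-match-wins behaviour can be sketched in a few lines of plain Scala. This is purely illustrative (the real chain lives in the AWS SDK; the names `CredentialsChainSketch`, `fromEnv`, and `resolve` are assumptions, and the profile-file and EC2-metadata providers are omitted):

```scala
// Illustrative sketch of first-match-wins credential resolution,
// mirroring the first two providers in the order described above.
object CredentialsChainSketch {
  case class Credentials(accessKeyId: String, secretKey: String)

  // Provider 1: environment variables.
  def fromEnv(env: Map[String, String]): Option[Credentials] =
    for {
      id  <- env.get("AWS_ACCESS_KEY_ID")
      key <- env.get("AWS_SECRET_ACCESS_KEY")
    } yield Credentials(id, key)

  // Provider 2: Java system properties.
  def fromSystemProps(props: Map[String, String]): Option[Credentials] =
    for {
      id  <- props.get("aws.accessKeyId")
      key <- props.get("aws.secretKey")
    } yield Credentials(id, key)

  // The first provider that yields credentials wins.
  def resolve(env: Map[String, String], props: Map[String, String]): Option[Credentials] =
    fromEnv(env).orElse(fromSystemProps(props))
}
```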

    See https://spark.apache.org/docs/latest/streaming-kinesis-integration.html for more details on the Kinesis Spark Streaming integration.

  2. object KinesisWordProducerASL

    Usage: KinesisWordProducerASL <stream-name> <endpoint-url> \
      <records-per-sec> <words-per-record>

      <stream-name> is the name of the Kinesis stream (e.g. mySparkStream)
      <endpoint-url> is the endpoint of the Kinesis service (e.g. https://kinesis.us-east-1.amazonaws.com)
      <records-per-sec> is the rate of records per second to put onto the stream
      <words-per-record> is the number of words per record

    Example:
      $ SPARK_HOME/bin/run-example streaming.KinesisWordProducerASL mySparkStream \
          https://kinesis.us-east-1.amazonaws.com 10 5
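The shape of what the producer builds per second can be sketched as follows: <records-per-sec> records, each containing <words-per-record> words drawn from a small vocabulary. The object and method names here are illustrative, and the actual producer additionally puts each record onto the Kinesis stream:

```scala
import scala.util.Random

// Illustrative sketch of the per-second batch the producer generates;
// pushing the records to Kinesis is omitted.
object ProducerSketch {
  val vocabulary: Seq[String] = Seq("spark", "kinesis", "streaming", "word", "count")

  // One record: <words-per-record> random words joined by spaces.
  def makeRecord(wordsPerRecord: Int, rng: Random): String =
    Seq.fill(wordsPerRecord)(vocabulary(rng.nextInt(vocabulary.size))).mkString(" ")

  // One batch: <records-per-sec> such records.
  def makeBatch(recordsPerSec: Int, wordsPerRecord: Int, rng: Random): Seq[String] =
    Seq.fill(recordsPerSec)(makeRecord(wordsPerRecord, rng))
}
```

With the arguments from the example above (10 records per second, 5 words per record), each batch contains 10 space-separated 5-word records.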
