Class com.krux.hyperion.aws.AdpHiveCopyActivity

case class AdpHiveCopyActivity(id: String, name: Option[String], filterSql: Option[String], generatedScriptsPath: Option[String], input: Option[AdpRef[AdpDataNode]], output: Option[AdpRef[AdpDataNode]], preActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]], postActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]], workerGroup: Option[String], runsOn: Option[AdpRef[AdpEmrCluster]], dependsOn: Option[Seq[AdpRef[AdpActivity]]], precondition: Option[Seq[AdpRef[AdpPrecondition]]], onFail: Option[Seq[AdpRef[AdpSnsAlarm]]], onSuccess: Option[Seq[AdpRef[AdpSnsAlarm]]], onLateAction: Option[Seq[AdpRef[AdpSnsAlarm]]], attemptTimeout: Option[String], lateAfterTimeout: Option[String], maximumRetries: Option[String], retryDelay: Option[String], failureAndRerunMode: Option[String], maxActiveInstances: Option[String]) extends AdpDataPipelineAbstractObject with AdpActivity with Product with Serializable

ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-hivecopyactivity.html

filterSql

A Hive SQL statement fragment that filters a subset of DynamoDB or Amazon S3 data to copy. The filter should only contain predicates and not begin with a WHERE clause, because AWS Data Pipeline adds it automatically.

generatedScriptsPath

An Amazon S3 path capturing the Hive script that ran after all the expressions in it were evaluated, including staging information. This script is stored for troubleshooting purposes.

input

The input data node. This must be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.

output

The output data node. If input is S3DataNode, this must be DynamoDBDataNode. Otherwise, this can be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.
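
A minimal construction sketch, assuming an S3-to-DynamoDB copy. The id, name, paths, and literal option values below are hypothetical placeholders, and how concrete AdpRef values are obtained is library-specific, so the refs are stubbed with ???:

// Hypothetical wiring of an AdpHiveCopyActivity that copies filtered S3
// data into DynamoDB. All literal values are illustrative only.
val s3Input: AdpRef[AdpDataNode] = ???      // reference to an S3DataNode
val dynamoOutput: AdpRef[AdpDataNode] = ??? // reference to a DynamoDBDataNode
val cluster: AdpRef[AdpEmrCluster] = ???    // reference to an EMR cluster

val copyActivity = AdpHiveCopyActivity(
  id = "HiveCopyActivity_example",
  name = Option("CopyEventsToDynamo"),
  // Bare predicate only: AWS Data Pipeline prepends WHERE automatically.
  filterSql = Option("eventDate >= '2015-01-01'"),
  generatedScriptsPath = Option("s3://my-bucket/generated-scripts/"),
  input = Option(s3Input),
  output = Option(dynamoOutput),
  preActivityTaskConfig = None,
  postActivityTaskConfig = None,
  workerGroup = None,
  runsOn = Option(cluster),
  dependsOn = None,
  precondition = None,
  onFail = None,
  onSuccess = None,
  onLateAction = None,
  attemptTimeout = None,
  lateAfterTimeout = None,
  maximumRetries = Option("2"),
  retryDelay = Option("10 minutes"),
  failureAndRerunMode = Option("cascade"),
  maxActiveInstances = None
)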

Source: AdpActivities.scala

Linear Supertypes: Serializable, Serializable, Product, Equals, AdpActivity, AdpDataPipelineObject, AdpDataPipelineAbstractObject, AdpObject, AnyRef, Any

Instance Constructors

  1. new AdpHiveCopyActivity(id: String, name: Option[String], filterSql: Option[String], generatedScriptsPath: Option[String], input: Option[AdpRef[AdpDataNode]], output: Option[AdpRef[AdpDataNode]], preActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]], postActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]], workerGroup: Option[String], runsOn: Option[AdpRef[AdpEmrCluster]], dependsOn: Option[Seq[AdpRef[AdpActivity]]], precondition: Option[Seq[AdpRef[AdpPrecondition]]], onFail: Option[Seq[AdpRef[AdpSnsAlarm]]], onSuccess: Option[Seq[AdpRef[AdpSnsAlarm]]], onLateAction: Option[Seq[AdpRef[AdpSnsAlarm]]], attemptTimeout: Option[String], lateAfterTimeout: Option[String], maximumRetries: Option[String], retryDelay: Option[String], failureAndRerunMode: Option[String], maxActiveInstances: Option[String])

Value Members

  1. val attemptTimeout: Option[String]

    The timeout time interval for an object attempt. If an attempt does not complete within the start time plus this time interval, AWS Data Pipeline marks the attempt as failed and your retry settings determine the next steps taken.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  2. val dependsOn: Option[Seq[AdpRef[AdpActivity]]]

    One or more references to other Activities that must reach the FINISHED state before this activity will start.

    Definition Classes: AdpHiveCopyActivity → AdpActivity
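
    A brief sketch of this dependency gating, reusing the copyActivity placeholder from the construction sketch above; the upstream refs are hypothetical:

    // Gate this activity on two upstream activities reaching FINISHED.
    val upstreamA: AdpRef[AdpActivity] = ???
    val upstreamB: AdpRef[AdpActivity] = ???
    val gated = copyActivity.copy(dependsOn = Option(Seq(upstreamA, upstreamB)))
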
  3. val failureAndRerunMode: Option[String]

    Determines whether pipeline object failures and rerun commands cascade through pipeline object dependencies. Possible values include cascade and none.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  4. val filterSql: Option[String]

    A Hive SQL statement fragment that filters a subset of DynamoDB or Amazon S3 data to copy. The filter should only contain predicates and not begin with a WHERE clause, because AWS Data Pipeline adds it automatically.
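
    For example, a fragment that copies only recent rows from one region would be passed as a bare predicate. A minimal sketch; the column names are hypothetical:

    // Predicates only; the leading WHERE keyword is supplied by AWS Data
    // Pipeline. Column names are hypothetical.
    val filter: Option[String] = Option("region = 'us-east-1' and eventDate >= '2015-01-01'")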

  5. val generatedScriptsPath: Option[String]

    An Amazon S3 path capturing the Hive script that ran after all the expressions in it were evaluated, including staging information. This script is stored for troubleshooting purposes.

  6. val id: String

    The ID of the object. IDs must be unique within a pipeline definition.

    Definition Classes: AdpHiveCopyActivity → AdpDataPipelineObject → AdpObject

  7. val input: Option[AdpRef[AdpDataNode]]

    The input data node. This must be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.

  8. val lateAfterTimeout: Option[String]

    The time period in which the object run must start. If the object does not start within the scheduled start time plus this time interval, it is considered late.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  9. val maxActiveInstances: Option[String]

    The maximum number of concurrent active instances of a component. Re-runs do not count toward the number of active instances.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  10. val maximumRetries: Option[String]

    The maximum number of times to retry the action. The default value is 2, which results in 3 tries total (1 original attempt plus 2 retries). The maximum value is 5 (6 total attempts).

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  11. val name: Option[String]

    The optional, user-defined label of the object. If you do not provide a name for an object in a pipeline definition, AWS Data Pipeline automatically duplicates the value of id.

    Definition Classes: AdpHiveCopyActivity → AdpDataPipelineObject → AdpDataPipelineAbstractObject

  12. val onFail: Option[Seq[AdpRef[AdpSnsAlarm]]]

    The SNS alarm to raise when the activity fails.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  13. val onLateAction: Option[Seq[AdpRef[AdpSnsAlarm]]]

    The SNS alarm to raise when the activity fails to start on time.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  14. val onSuccess: Option[Seq[AdpRef[AdpSnsAlarm]]]

    The SNS alarm to raise when the activity succeeds.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  15. val output: Option[AdpRef[AdpDataNode]]

    The output data node. If input is S3DataNode, this must be DynamoDBDataNode. Otherwise, this can be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.
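
    Summarizing the constraint above, the valid input/output pairings are:

      input: S3DataNode        →  output: DynamoDBDataNode
      input: DynamoDBDataNode  →  output: S3DataNode or DynamoDBDataNode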

  16. val postActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]]

  17. val preActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]]

  18. val precondition: Option[Seq[AdpRef[AdpPrecondition]]]

    A condition that must be met before the object can run. To specify multiple conditions, add multiple precondition fields. The activity cannot run until all its conditions are met.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  19. val retryDelay: Option[String]

    The timeout duration between two retry attempts. The default is 10 minutes.

    Definition Classes: AdpHiveCopyActivity → AdpActivity

  20. val runsOn: Option[AdpRef[AdpEmrCluster]]

  21. val type: String

    The type of object. Use one of the predefined AWS Data Pipeline object types.

    Definition Classes: AdpHiveCopyActivity → AdpDataPipelineObject

  22. val workerGroup: Option[String]

    The worker group. This is used for routing tasks. If you provide a runsOn value and workerGroup exists, workerGroup is ignored.

    Definition Classes: AdpHiveCopyActivity → AdpActivity
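
    A hedged illustration of that precedence, reusing the copyActivity and cluster placeholders from the construction sketch above; the worker group name is hypothetical:

    // Both routing fields set: per the rule above, the task is routed to the
    // EMR cluster and the worker group setting is ignored.
    val routed = copyActivity.copy(
      workerGroup = Option("etl-workers"), // ignored because runsOn is present
      runsOn = Option(cluster)
    )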

All other members (!=, ##, ==, asInstanceOf, clone, eq, finalize, getClass, isInstanceOf, ne, notify, notifyAll, synchronized, wait) are standard methods inherited from AnyRef and Any.