Class AdpHiveActivity

Package
com.krux.hyperion.aws

class AdpHiveActivity extends AdpDataPipelineAbstractObject with AdpActivity

ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-hiveactivity.html

Source
AdpActivities.scala
Linear Supertypes
AdpActivity, AdpDataPipelineObject, AdpDataPipelineAbstractObject, AdpObject, AnyRef, Any

Instance Constructors

  1. new AdpHiveActivity(id: String, name: Option[String], hiveScript: Option[String], scriptUri: Option[String], scriptVariable: Option[Seq[String]], stage: Option[String], input: Option[Seq[AdpRef[AdpDataNode]]], output: Option[Seq[AdpRef[AdpDataNode]]], hadoopQueue: Option[String], preActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]], postActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]], workerGroup: Option[String], runsOn: Option[AdpRef[AdpEmrCluster]], dependsOn: Option[Seq[AdpRef[AdpActivity]]], precondition: Option[Seq[AdpRef[AdpPrecondition]]], onFail: Option[Seq[AdpRef[AdpSnsAlarm]]], onSuccess: Option[Seq[AdpRef[AdpSnsAlarm]]], onLateAction: Option[Seq[AdpRef[AdpSnsAlarm]]], attemptTimeout: Option[String], lateAfterTimeout: Option[String], maximumRetries: Option[String], retryDelay: Option[String], failureAndRerunMode: Option[String], maxActiveInstances: Option[String])

    hiveScript

    The Hive script to run.

    scriptUri

    The location of the Hive script to run. For example, s3://script location.

    scriptVariable

    Specifies script variables for Amazon EMR to pass to Hive while running a script. For example, the following script variables would pass a SAMPLE and a FILTER_DATE variable to Hive: SAMPLE=s3://elasticmapreduce/samples/hive-ads and FILTER_DATE=#{format(@scheduledStartTime,'YYYY-MM-dd')}%. This field accepts multiple values and works with both the hiveScript and scriptUri fields. In addition, scriptVariable works regardless of whether stage is set to true or false. It is especially useful for sending dynamic values to Hive using AWS Data Pipeline expressions and functions; for more information, see Pipeline Expressions and Functions in the AWS Data Pipeline Developer Guide.

    stage

    Determines whether staging is enabled. Staging is not permitted with Hive 11, so use an Amazon EMR AMI version 3.2.0 or greater.

    input

    The input data source. Type: data node object reference. Required: yes.

    output

    The location for the output. Type: data node object reference. Required: yes.

    runsOn

    The Amazon EMR cluster on which to run this activity. Type: EmrCluster object reference. Required: yes.
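
Example (a minimal construction sketch, not taken from the library's own documentation): it assumes the class is instantiated directly with named arguments, which the higher-level Hyperion builders normally do for you. All paths and values shown are hypothetical, and the reference-typed fields (input, output, runsOn, and so on) are left as None so the sketch stays self-contained; in a real pipeline they would carry AdpRef values pointing at the corresponding data node, EMR cluster, and activity objects.

  import com.krux.hyperion.aws.AdpHiveActivity

  // Hedged sketch with hypothetical values; see the parameter notes above.
  val hiveActivity = new AdpHiveActivity(
    id = "HiveActivity_Example",                                // must be unique within the pipeline definition
    name = Option("ExampleHiveActivity"),
    hiveScript = None,                                          // supply either hiveScript or scriptUri
    scriptUri = Option("s3://my-bucket/scripts/example.q"),     // hypothetical script location
    scriptVariable = Option(Seq(
      "SAMPLE=s3://elasticmapreduce/samples/hive-ads",
      "FILTER_DATE=#{format(@scheduledStartTime,'YYYY-MM-dd')}" // resolved by AWS Data Pipeline at run time
    )),
    stage = Option("true"),
    input = None,                                               // would reference input AdpDataNode objects
    output = None,                                              // would reference output AdpDataNode objects
    hadoopQueue = None,
    preActivityTaskConfig = None,
    postActivityTaskConfig = None,
    workerGroup = None,                                         // ignored when runsOn is provided
    runsOn = None,                                              // would reference an AdpEmrCluster
    dependsOn = None,
    precondition = None,
    onFail = None,
    onSuccess = None,
    onLateAction = None,
    attemptTimeout = Option("2 hours"),
    lateAfterTimeout = None,
    maximumRetries = Option("2"),                               // default: 2 retries, 3 attempts in total
    retryDelay = Option("10 minutes"),
    failureAndRerunMode = Option("cascade"),                    // cascade or none
    maxActiveInstances = None
  )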

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. val attemptTimeout: Option[String]

    The timeout time interval for an object attempt. If an attempt does not complete within the start time plus this time interval, AWS Data Pipeline marks the attempt as failed and your retry settings determine the next steps taken.

    Definition Classes
    AdpHiveActivity → AdpActivity
  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. val dependsOn: Option[Seq[AdpRef[AdpActivity]]]

    One or more references to other Activities that must reach the FINISHED state before this activity will start.

    Definition Classes
    AdpHiveActivity → AdpActivity
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. val failureAndRerunMode: Option[String]

    Determines whether pipeline object failures and rerun commands cascade through pipeline object dependencies. Possible values include cascade and none.

    Definition Classes
    AdpHiveActivity → AdpActivity
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. val hadoopQueue: Option[String]

  14. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  15. val hiveScript: Option[String]

    The Hive script to run.

  16. val id: String

    The ID of the object. IDs must be unique within a pipeline definition.

    Definition Classes
    AdpHiveActivity → AdpDataPipelineObject → AdpObject
  17. val input: Option[Seq[AdpRef[AdpDataNode]]]

    The input data source. Type: data node object reference. Required: yes.

  18. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  19. val lateAfterTimeout: Option[String]

    The time period in which the object run must start. If the object does not start within the scheduled start time plus this time interval, it is considered late.

    Definition Classes
    AdpHiveActivity → AdpActivity
  20. val maxActiveInstances: Option[String]

    The maximum number of concurrent active instances of a component. Re-runs do not count toward the number of active instances.

    Definition Classes
    AdpHiveActivity → AdpActivity
  21. val maximumRetries: Option[String]

    The maximum number of times to retry the action. The default value is 2, which results in 3 tries total (1 original attempt plus 2 retries). The maximum value is 5 (6 total attempts).

    Definition Classes
    AdpHiveActivity → AdpActivity
  22. val name: Option[String]

    The optional, user-defined label of the object. If you do not provide a name for an object in a pipeline definition, AWS Data Pipeline automatically duplicates the value of id.

    Definition Classes
    AdpHiveActivity → AdpDataPipelineObject → AdpDataPipelineAbstractObject
  23. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  24. final def notify(): Unit

    Definition Classes
    AnyRef
  25. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  26. val onFail: Option[Seq[AdpRef[AdpSnsAlarm]]]

    The SNS alarm to raise when the activity fails.

    Definition Classes
    AdpHiveActivity → AdpActivity
  27. val onLateAction: Option[Seq[AdpRef[AdpSnsAlarm]]]

    The SNS alarm to raise when the activity fails to start on time.

    Definition Classes
    AdpHiveActivity → AdpActivity
  28. val onSuccess: Option[Seq[AdpRef[AdpSnsAlarm]]]

    The SNS alarm to raise when the activity succeeds.

    Definition Classes
    AdpHiveActivity → AdpActivity
  29. val output: Option[Seq[AdpRef[AdpDataNode]]]

    The location for the output. Type: data node object reference. Required: yes.

  30. val postActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]]

  31. val preActivityTaskConfig: Option[AdpRef[AdpShellScriptConfig]]

  32. val precondition: Option[Seq[AdpRef[AdpPrecondition]]]

    A condition that must be met before the object can run. To specify multiple conditions, add multiple precondition fields. The activity cannot run until all its conditions are met.

    Definition Classes
    AdpHiveActivity → AdpActivity
  33. val retryDelay: Option[String]

    The timeout duration between two retry attempts. The default is 10 minutes.

    Definition Classes
    AdpHiveActivity → AdpActivity
  34. val runsOn: Option[AdpRef[AdpEmrCluster]]

    The Amazon EMR cluster on which to run this activity. Type: EmrCluster object reference. Required: yes.

  35. val scriptUri: Option[String]

    The location of the Hive script to run. For example, s3://script location.

  36. val scriptVariable: Option[Seq[String]]

    Specifies script variables for Amazon EMR to pass to Hive while running a script. For example, the following script variables would pass a SAMPLE and a FILTER_DATE variable to Hive: SAMPLE=s3://elasticmapreduce/samples/hive-ads and FILTER_DATE=#{format(@scheduledStartTime,'YYYY-MM-dd')}%. This field accepts multiple values and works with both the hiveScript and scriptUri fields. In addition, scriptVariable works regardless of whether stage is set to true or false. It is especially useful for sending dynamic values to Hive using AWS Data Pipeline expressions and functions; for more information, see Pipeline Expressions and Functions in the AWS Data Pipeline Developer Guide. A sketch of such values follows the member list below.

  37. val stage: Option[String]

    Determines whether staging is enabled. Staging is not permitted with Hive 11, so use an Amazon EMR AMI version 3.2.0 or greater.

  38. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  39. def toString(): String

    Definition Classes
    AnyRef → Any
  40. val type: String

    The type of object. Use one of the predefined AWS Data Pipeline object types.

    Definition Classes
    AdpHiveActivity → AdpDataPipelineObject
  41. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  42. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  43. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  44. val workerGroup: Option[String]

    The worker group. This is used for routing tasks. If you provide a runsOn value and workerGroup exists, workerGroup is ignored.

    Definition Classes
    AdpHiveActivity → AdpActivity
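
As noted under scriptVariable above, a minimal sketch of script-variable values (the sample S3 path and the date expression come from the field description; everything else is illustrative):

  // Plain "NAME=value" strings; the FILTER_DATE entry is an AWS Data Pipeline
  // expression that is resolved at run time before being handed to Hive.
  val scriptVariables: Option[Seq[String]] = Option(Seq(
    "SAMPLE=s3://elasticmapreduce/samples/hive-ads",
    "FILTER_DATE=#{format(@scheduledStartTime,'YYYY-MM-dd')}"
  ))

Within the Hive script itself, such values are typically referenced as ${SAMPLE} and ${FILTER_DATE}.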

Inherited from AdpActivity

Inherited from AdpDataPipelineObject

Inherited from AdpObject

Inherited from AnyRef

Inherited from Any
