Google Cloud Dataflow SDK for Java

Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines.

Dataflow SDK for Java is a distribution of a portion of the Apache Beam project. This repository hosts the code to build this distribution and any Dataflow-specific code/modules. The underlying source code is hosted in the Apache Beam repository.

General usage of Google Cloud Dataflow does not require use of this repository. Instead, you can do any one of the following:

  1. Depend directly on a specific version of the SDK published to the Maven Central Repository by adding the following dependency to your project (for example, in a Maven pom.xml or an Eclipse project):

     <dependency>
       <groupId>com.google.cloud.dataflow</groupId>
       <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
       <version>version_number</version>
     </dependency>
    
  2. Download the example pipelines from the separate DataflowJavaSDK-examples repository.

  3. If you are using the Eclipse integrated development environment (IDE), the Cloud Dataflow Plugin for Eclipse provides tools to create and execute Dataflow pipelines from within Eclipse.

Status

Both the SDK and the Dataflow Service are generally available and considered stable and fully qualified for production use.

The master branch contains code to build Dataflow SDK versions 2.0.0 and newer as a distribution of Apache Beam. Pre-Beam SDK versions (1.x) are maintained in the master-1.x branch.

Overview

The key concepts in this programming model are:

  • PCollection: represents a collection of data, which could be bounded or unbounded in size.
  • PTransform: represents a computation that transforms input PCollections into output PCollections.
  • Pipeline: manages a directed acyclic graph of PTransforms and PCollections that is ready for execution.
  • PipelineRunner: specifies where and how the pipeline should execute.
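A minimal sketch of how these concepts fit together, assuming the Beam 2.x Java SDK is on the classpath (class name and input values are illustrative, not from this repository):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class MinimalPipeline {
  public static void main(String[] args) {
    // PipelineOptions configure where and how the pipeline runs.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    // A bounded PCollection created from an in-memory list.
    PCollection<String> words = p.apply(Create.of("hello", "world", "hello"));

    // A PTransform that produces a new PCollection of per-element counts.
    words.apply(Count.perElement());

    // The PipelineRunner chosen in the options executes the graph.
    p.run().waitUntilFinish();
  }
}
```

Each `apply` call adds a node to the pipeline's directed acyclic graph; nothing executes until `run()` is called.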

We provide two runners:

  1. The DirectRunner runs the pipeline on your local machine.
  2. The DataflowRunner submits the pipeline to the Cloud Dataflow Service, where it runs using managed resources in the Google Cloud Platform.
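The runner can be selected programmatically through pipeline options. A sketch assuming the Dataflow runner module is on the classpath (the project ID and bucket are placeholders):

```java
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RunnerSelection {
  public static void main(String[] args) {
    // Configure the pipeline to run on the Cloud Dataflow Service.
    DataflowPipelineOptions options =
        PipelineOptionsFactory.as(DataflowPipelineOptions.class);
    options.setRunner(DataflowRunner.class);
    options.setProject("my-project-id");          // placeholder GCP project
    options.setTempLocation("gs://my-bucket/tmp"); // placeholder staging bucket
  }
}
```

The same settings can instead be supplied on the command line (e.g. `--runner=DataflowRunner --project=... --tempLocation=...`); omitting them defaults to the DirectRunner on your local machine.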

The SDK is built to be extensible and support additional execution environments beyond local execution and the Google Cloud Dataflow Service. Apache Beam contains additional SDKs, runners, and IO connectors.

Getting Started

Please try our Quickstarts.

Contact Us

We welcome all usage-related questions on Stack Overflow tagged with google-cloud-dataflow.

Please use the GitHub issue tracker to report bugs, or to share comments and questions regarding SDK development.

More Information

Apache, Apache Beam and the orange letter B logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
