diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 9494814b5..142fe571c 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -72,7 +72,7 @@ cd connector
 sbt assembly
 ```
 
-Running this will run all unit tests and build the jar to target/scala-2.12/spark-vertica-connector-assembly-3.3.0.jar
+Running this will run all unit tests and build the jar to target/scala-2.12/spark-vertica-connector-assembly-3.3.1.jar
 
 ## Step 4: Set up an environment
 The easiest way to set up an environment is to spin up the docker containers for a sandbox client environment and single-node clusters for both Vertica and HDFS following [this guide.](https://github.com/vertica/spark-connector/blob/main/examples/README.md)
@@ -88,7 +88,7 @@ The next requirement is a spark application that uses the connector jar. Example
 ```shell
 cd examples/basic-read
 mkdir lib
-cp ../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.3.0.jar lib
+cp ../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.3.1.jar lib
 sbt run
 ```
diff --git a/examples/kerberos-example/README.md b/examples/kerberos-example/README.md
index 510bda91f..35e39a144 100644
--- a/examples/kerberos-example/README.md
+++ b/examples/kerberos-example/README.md
@@ -20,7 +20,7 @@ sbt assembly
 ```
 Then create a `lib` folder at `/kerberos-example` and put the spark connector that you assembled inside.
 ```
 mkdir /spark-connector/examples/kerberos-example/lib
-cp /spark-connector/connector/target/scala-2.12/spark-vertica-connector-assembly-3.3.0.jar /spark-connector/examples/kerberos-example/lib
+cp /spark-connector/connector/target/scala-2.12/spark-vertica-connector-assembly-3.3.1.jar /spark-connector/examples/kerberos-example/lib
 ```
 Then in the example's `build.sbt`, comment out the vertica-spark connector dependency.
diff --git a/examples/sparklyr/run.r b/examples/sparklyr/run.r
index ae3426761..b4c4d2259 100644
--- a/examples/sparklyr/run.r
+++ b/examples/sparklyr/run.r
@@ -6,7 +6,7 @@ if (file.exists("done")) unlink("done")
 
 # Create the Spark config and give access to our connector jar file
 config <- spark_config()
-config$sparklyr.jars.default <- "../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.3.0.jar"
+config$sparklyr.jars.default <- "../../connector/target/scala-2.12/spark-vertica-connector-assembly-3.3.1.jar"
 
 # Submit a new Spark job that executes sparkapp.r with Spark version 3.1
 spark_submit(master = "spark://localhost:7077", version = "3.1", file = "sparkapp.r", config = config)
diff --git a/functional-tests/README.md b/functional-tests/README.md
index 810f0bd6b..c4d9e93c8 100644
--- a/functional-tests/README.md
+++ b/functional-tests/README.md
@@ -8,7 +8,7 @@ Configuration is specified with application.conf (HOCON format)
 From the functional-tests directory, run the following commands:
 ```
 mkdir lib
-cd ../connector && sbt assembly && cp target/scala-2.12/spark-vertica-connector-assembly-3.3.0.jar ../functional-tests/lib && cd ../functional-tests
+cd ../connector && sbt assembly && cp target/scala-2.12/spark-vertica-connector-assembly-3.3.1.jar ../functional-tests/lib && cd ../functional-tests
 ```
 
 This will create a lib folder and then build and copy the connector JAR file to it.
diff --git a/version.properties b/version.properties
index 8fb28c370..9bde1b9e5 100644
--- a/version.properties
+++ b/version.properties
@@ -1,2 +1,2 @@
-connector-version=3.3.0
+connector-version=3.3.1