12 Oct 2016
The Apache Flink community released the next bugfix version of the Apache Flink 1.1 series.
We recommend all users upgrade to Flink 1.1.3.
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>1.1.3</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.10</artifactId>
  <version>1.1.3</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients_2.10</artifactId>
  <version>1.1.3</version>
</dependency>
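If your project declares the Flink version as a Maven property instead of repeating it per dependency, the upgrade is a one-line change (the property name below is just a common convention, not something mandated by Flink):

<properties>
  <!-- bump this single property to move all Flink artifacts to 1.1.3 -->
  <flink.version>1.1.3</flink.version>
</properties>

Each Flink dependency would then reference <version>${flink.version}</version>.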
You can find the binaries on the updated Downloads page.
Note for RocksDB Backend Users
It is highly recommended to use the “fully async” mode for the RocksDB state backend. The “fully async” mode will most likely allow a smooth upgrade to Flink 1.2 (via savepoints) when it is released. The “semi async” mode will no longer be supported in Flink 1.2.
RocksDBStateBackend backend = new RocksDBStateBackend("...");
// enable the "fully async" snapshot mode for savepoint compatibility with Flink 1.2
backend.enableFullyAsyncSnapshots();
env.setStateBackend(backend);
Release Notes - Flink - Version 1.1.3
Bug
- [FLINK-2662] - CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."
- [FLINK-4311] - TableInputFormat fails when reused on next split
- [FLINK-4329] - Fix Streaming File Source Timestamps/Watermarks Handling
- [FLINK-4485] - Finished jobs in yarn session fill /tmp filesystem
- [FLINK-4513] - Kafka connector documentation refers to Flink 1.1-SNAPSHOT
- [FLINK-4514] - ExpiredIteratorException in Kinesis Consumer on long catch-ups to head of stream
- [FLINK-4540] - Detached job execution may prevent cluster shutdown
- [FLINK-4544] - TaskManager metrics are vulnerable to custom JMX bean installation
- [FLINK-4566] - ProducerFailedException does not properly preserve Exception causes
- [FLINK-4588] - Fix Merging of Covering Window in MergingWindowSet
- [FLINK-4589] - Fix Merging of Covering Window in MergingWindowSet
- [FLINK-4616] - Kafka consumer doesn't store last emitted watermarks per partition in state
- [FLINK-4618] - FlinkKafkaConsumer09 should start from the next record on startup from offsets in Kafka
- [FLINK-4619] - JobManager does not answer to client when restore from savepoint fails
- [FLINK-4636] - AbstractCEPPatternOperator fails to restore state
- [FLINK-4640] - Serialization of the initialValue of a Fold on WindowedStream fails
- [FLINK-4651] - Re-register processing time timers at the WindowOperator upon recovery.
- [FLINK-4663] - Flink JDBCOutputFormat logs wrong WARN message
- [FLINK-4672] - TaskManager accidentally decorates Kill messages
- [FLINK-4677] - Jars with no job executions produces NullPointerException in ClusterClient
- [FLINK-4702] - Kafka consumer must commit offsets asynchronously
- [FLINK-4727] - Kafka 0.9 Consumer should also checkpoint auto retrieved offsets even when no data is read
- [FLINK-4732] - Maven junction plugin security threat
- [FLINK-4777] - ContinuousFileMonitoringFunction may throw IOException when files are moved
- [FLINK-4788] - State backend class cannot be loaded, because fully qualified name converted to lower-case
Improvement
- [FLINK-4396] - GraphiteReporter class not found at startup of jobmanager
- [FLINK-4574] - Strengthen fetch interval implementation in Kinesis consumer
- [FLINK-4723] - Unify behaviour of committed offsets to Kafka / ZK for Kafka 0.8 and 0.9 consumer