| Interface and Description |
|---|
| org.apache.flink.table.sinks.BatchTableSink: use OutputFormatTableSink instead. |
| org.apache.flink.table.sources.BatchTableSource: use InputFormatTableSource instead. |
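For migration, a minimal InputFormatTableSource sketch may help. The schema, file path, and CSV format below are illustrative assumptions, not taken from this page:

```java
import org.apache.flink.api.common.io.InputFormat;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.io.RowCsvInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.sources.InputFormatTableSource;
import org.apache.flink.table.types.DataType;
import org.apache.flink.types.Row;

// Hypothetical replacement for a BatchTableSource that reads (id, name) rows from CSV.
public class CsvRowTableSource extends InputFormatTableSource<Row> {

    private final TableSchema schema = TableSchema.builder()
            .field("id", DataTypes.BIGINT())
            .field("name", DataTypes.STRING())
            .build();

    @Override
    public TableSchema getTableSchema() {
        return schema;
    }

    @Override
    public DataType getProducedDataType() {
        return schema.toRowDataType();
    }

    @Override
    public InputFormat<Row, ?> getInputFormat() {
        // The path is a placeholder; the field types mirror the schema above.
        return new RowCsvInputFormat(
                new Path("/tmp/input.csv"),
                new TypeInformation<?>[] {Types.LONG, Types.STRING});
    }
}
```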
| Class and Description |
|---|
| org.apache.flink.table.descriptors.OldCsv: use the RFC-compliant Csv format in the dedicated flink-formats/flink-csv module instead when writing to Kafka. |
| org.apache.flink.table.descriptors.OldCsvValidator: use the RFC-compliant Csv format in the dedicated flink-formats/flink-csv module instead. |
| Method and Description |
|---|
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.connect(ConnectorDescriptor): the SQL CREATE TABLE DDL is richer than this part of the API. This method might be refactored in future versions. Please use executeSql(ddl) to register a table instead. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.connect(ConnectorDescriptor): the SQL CREATE TABLE DDL is richer than this part of the API. This method might be refactored in future versions. Please use executeSql(ddl) to register a table instead. |
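The recommended executeSql(ddl) replacement for both connect(...) variants can be sketched as follows; the table name, columns, and connector options are illustrative assumptions:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class CreateTableDdlExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Instead of tEnv.connect(...), register the table with a CREATE TABLE DDL:
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  order_id BIGINT," +
                "  price DECIMAL(10, 2)" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = '/tmp/orders'," +
                "  'format' = 'csv'" +
                ")");
    }
}
```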
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamExecutionEnvironment, TableConfig): use StreamTableEnvironment.create(StreamExecutionEnvironment) and TableEnvironment.getConfig() for manipulating TableConfig. |
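A sketch of that replacement pattern; the time-zone setting is just one example of a TableConfig change:

```java
import java.time.ZoneId;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class TableConfigExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Create without a TableConfig argument ...
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
        // ... and manipulate the configuration afterwards via getConfig().
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("UTC"));
    }
}
```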
| org.apache.flink.table.factories.StreamTableSinkFactory.createStreamTableSink(Map&lt;String, String&gt;): Context contains more information, including the table schema. Please use TableSinkFactory.createTableSink(Context) instead. |
| org.apache.flink.table.factories.StreamTableSourceFactory.createStreamTableSource(Map&lt;String, String&gt;): Context contains more information, including the table schema. Please use TableSourceFactory.createTableSource(Context) instead. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.createTemporaryView(String, DataSet<T>, String) |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.createTemporaryView(String, DataStream<T>, String) |
| org.apache.flink.table.descriptors.OldCsv.deriveSchema(): deriving the format schema from the table's schema is now the default behavior, so there is no need to declare schema derivation explicitly. |
| org.apache.flink.table.descriptors.SchemaValidator.deriveTableSinkSchema(DescriptorProperties): this method combines two separate concepts of table schema and field mapping. This should be split into two methods once we have support for the corresponding interfaces (see FLINK-9870). |
| org.apache.flink.table.descriptors.OldCsv.field(String, DataType): OldCsv derives the format schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
| org.apache.flink.table.descriptors.OldCsv.field(String, String): OldCsv derives the format schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
| org.apache.flink.table.sources.CsvTableSource.Builder.field(String, TypeInformation&lt;?&gt;): this method will be removed in future versions as it uses the old type system. It is recommended to use CsvTableSource.Builder.field(String, DataType) instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
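A sketch of the DataTypes-based builder variant; the file path and field names are placeholders:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.sources.CsvTableSource;

public class CsvSourceBuilderExample {
    public static void main(String[] args) {
        // Declare fields with field(String, DataType) from the new type system,
        // not the deprecated field(String, TypeInformation) variant.
        CsvTableSource source = CsvTableSource.builder()
                .path("/tmp/input.csv")
                .field("id", DataTypes.BIGINT())
                .field("name", DataTypes.STRING())
                .build();
    }
}
```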
| org.apache.flink.table.descriptors.OldCsv.field(String, TypeInformation&lt;?&gt;): OldCsv derives the format schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.fromDataSet(DataSet<T>, String) |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.fromDataStream(DataStream<T>, String) |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.registerDataSet(String, DataSet<T>) |
| org.apache.flink.table.api.bridge.java.BatchTableEnvironment.registerDataSet(String, DataSet<T>, String) |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerDataStream(String, DataStream<T>) |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerDataStream(String, DataStream<T>, String) |
| org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerFunction(String, TableFunction&lt;T&gt;): use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Please note that the new method also uses the new type system and reflective extraction logic. It might be necessary to update the function implementation as well. See the documentation of TableFunction for more information on the new function design. |
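A sketch of the new registration path; the split function and its type hints are hypothetical examples of the reflective extraction logic:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

public class RegisterFunctionExample {

    // Hypothetical table function; the hints drive the new reflective type extraction.
    @FunctionHint(output = @DataTypeHint("ROW<word STRING>"))
    public static class SplitFunction extends TableFunction<Row> {
        public void eval(String str) {
            for (String s : str.split(" ")) {
                collect(Row.of(s));
            }
        }
    }

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Old: tEnv.registerFunction("split", new SplitFunction());
        // New:
        tEnv.createTemporarySystemFunction("split", SplitFunction.class);
    }
}
```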
| org.apache.flink.table.descriptors.OldCsv.schema(TableSchema): OldCsv derives the format schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly. This method will be removed in the future. |
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.