@Internal
public final class StreamTableEnvironmentImpl
extends org.apache.flink.table.api.internal.TableEnvironmentImpl
implements StreamTableEnvironment

The implementation for a Java StreamTableEnvironment. This enables conversions from/to
DataStream.
It binds to a given StreamExecutionEnvironment.
| Constructor and Description |
|---|
StreamTableEnvironmentImpl(org.apache.flink.table.catalog.CatalogManager catalogManager,
org.apache.flink.table.module.ModuleManager moduleManager,
org.apache.flink.table.catalog.FunctionCatalog functionCatalog,
org.apache.flink.table.api.TableConfig tableConfig,
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment,
org.apache.flink.table.delegation.Planner planner,
org.apache.flink.table.delegation.Executor executor,
boolean isStreamingMode,
ClassLoader userClassLoader) |
| Modifier and Type | Method and Description |
|---|---|
org.apache.flink.table.descriptors.StreamTableDescriptor |
connect(org.apache.flink.table.descriptors.ConnectorDescriptor connectorDescriptor)
Creates a table source and/or table sink from a descriptor.
|
static StreamTableEnvironment |
create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment,
org.apache.flink.table.api.EnvironmentSettings settings,
org.apache.flink.table.api.TableConfig tableConfig) |
<T> void |
createTemporaryView(String path,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Creates a view from the given
DataStream in a given path. |
<T> void |
createTemporaryView(String path,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
org.apache.flink.table.expressions.Expression... fields)
Creates a view from the given
DataStream in a given path with specified field names. |
<T> void |
createTemporaryView(String path,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Creates a view from the given
DataStream in a given path with specified field names. |
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment |
execEnv()
This is a temporary workaround for Python API.
|
<T> org.apache.flink.table.api.Table |
fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Converts the given
DataStream into a Table. |
<T> org.apache.flink.table.api.Table |
fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
org.apache.flink.table.expressions.Expression... fields)
Converts the given
DataStream into a Table with specified field names. |
<T> org.apache.flink.table.api.Table |
fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Converts the given
DataStream into a Table with specified field names. |
org.apache.flink.api.dag.Pipeline |
getPipeline(String jobName)
This method is used by the SQL client to submit a job.
|
protected org.apache.flink.table.operations.QueryOperation |
qualifyQueryOperation(org.apache.flink.table.catalog.ObjectIdentifier identifier,
org.apache.flink.table.operations.QueryOperation queryOperation) |
<T> void |
registerDataStream(String name,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Creates a view from the given
DataStream. |
<T> void |
registerDataStream(String name,
org.apache.flink.streaming.api.datastream.DataStream<T> dataStream,
String fields)
Creates a view from the given
DataStream in a given path with specified field names. |
<T,ACC> void |
registerFunction(String name,
org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction)
Registers an
AggregateFunction under a unique name in the TableEnvironment's catalog. |
<T,ACC> void |
registerFunction(String name,
org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction)
Registers a
TableAggregateFunction under a unique name in the TableEnvironment's
catalog. |
<T> void |
registerFunction(String name,
org.apache.flink.table.functions.TableFunction<T> tableFunction)
Registers a
TableFunction under a unique name in the TableEnvironment's catalog. |
<T> org.apache.flink.streaming.api.datastream.DataStream<T> |
toAppendStream(org.apache.flink.table.api.Table table,
Class<T> clazz)
Converts the given
Table into an append DataStream of a specified type. |
<T> org.apache.flink.streaming.api.datastream.DataStream<T> |
toAppendStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Converts the given
Table into an append DataStream of a specified type. |
<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> |
toRetractStream(org.apache.flink.table.api.Table table,
Class<T> clazz)
Converts the given
Table into a DataStream of add and retract messages. |
<T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> |
toRetractStream(org.apache.flink.table.api.Table table,
org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Converts the given
Table into a DataStream of add and retract messages. |
protected void |
validateTableSource(org.apache.flink.table.sources.TableSource<?> tableSource) |
Methods inherited from class org.apache.flink.table.api.internal.TableEnvironmentImpl:
create, createFunction, createFunction, createStatementSet, createTable, createTemporaryFunction, createTemporaryFunction, createTemporarySystemFunction, createTemporarySystemFunction, createTemporaryView, dropFunction, dropTemporaryFunction, dropTemporarySystemFunction, dropTemporaryTable, dropTemporaryView, execute, executeInternal, executeInternal, executeSql, explain, explain, explain, explainInternal, explainSql, from, fromTableSource, fromValues, fromValues, fromValues, fromValues, fromValues, fromValues, getCatalog, getCatalogManager, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, getExplainDetails, getParser, getPlanner, insertInto, insertInto, listCatalogs, listDatabases, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, listViews, loadModule, registerCatalog, registerFunction, registerTable, registerTableSink, registerTableSink, registerTableSinkInternal, registerTableSource, registerTableSourceInternal, scan, sqlQuery, sqlUpdate, translateAndClearBuffer, unloadModule, useCatalog, useDatabase

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface StreamTableEnvironment:
create, create, create, execute

Methods inherited from interface org.apache.flink.table.api.TableEnvironment:
create, createFunction, createFunction, createStatementSet, createTemporaryFunction, createTemporaryFunction, createTemporarySystemFunction, createTemporarySystemFunction, createTemporaryView, dropFunction, dropTemporaryFunction, dropTemporarySystemFunction, dropTemporaryTable, dropTemporaryView, executeSql, explain, explain, explain, explainSql, from, fromTableSource, fromValues, fromValues, fromValues, fromValues, fromValues, fromValues, getCatalog, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, insertInto, insertInto, listCatalogs, listDatabases, listFunctions, listModules, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, listViews, loadModule, registerCatalog, registerFunction, registerTable, registerTableSink, registerTableSink, registerTableSource, scan, sqlQuery, sqlUpdate, unloadModule, useCatalog, useDatabase

public StreamTableEnvironmentImpl(org.apache.flink.table.catalog.CatalogManager catalogManager,
org.apache.flink.table.module.ModuleManager moduleManager,
org.apache.flink.table.catalog.FunctionCatalog functionCatalog,
org.apache.flink.table.api.TableConfig tableConfig,
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment,
org.apache.flink.table.delegation.Planner planner,
org.apache.flink.table.delegation.Executor executor,
boolean isStreamingMode,
ClassLoader userClassLoader)
public static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.EnvironmentSettings settings, org.apache.flink.table.api.TableConfig tableConfig)
public <T> void registerFunction(String name, org.apache.flink.table.functions.TableFunction<T> tableFunction)
Description copied from interface: StreamTableEnvironment
Registers a TableFunction under a unique name in the TableEnvironment's catalog.
Registered functions can be referenced in Table API and SQL queries.
Specified by:
registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output row.
Parameters:
name - The name under which the function is registered.
tableFunction - The TableFunction to register.

public <T,ACC> void registerFunction(String name, org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction)
Description copied from interface: StreamTableEnvironment
Registers an AggregateFunction under a unique name in the TableEnvironment's catalog.
Registered functions can be referenced in Table API and SQL queries.
Specified by:
registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
aggregateFunction - The AggregateFunction to register.

public <T,ACC> void registerFunction(String name, org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction)
Description copied from interface: StreamTableEnvironment
Registers a TableAggregateFunction under a unique name in the TableEnvironment's catalog.
Registered functions can only be referenced in Table API.
Specified by:
registerFunction in interface StreamTableEnvironment
Type Parameters:
T - The type of the output value.
ACC - The type of aggregate accumulator.
Parameters:
name - The name under which the function is registered.
tableAggregateFunction - The TableAggregateFunction to register.

public <T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table.
The field names of the Table are automatically derived from the type of the DataStream.
Specified by:
fromDataStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
Returns:
The converted Table.

public <T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table with specified field names.
There are two modes for mapping original fields to the fields of the Table:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:
DataStream<Tuple2<String, Long>> stream = ...
// reorder the fields, rename the original 'f0' field to 'name' and add event-time
// attribute named 'rowtime'
Table table = tableEnv.fromDataStream(stream, "f1, rowtime.rowtime, f0 as 'name'");
2. Reference input fields by position:
In this mode, fields are simply renamed. Event-time attributes can
replace the field on their position in the input data (if it is of correct type) or be
appended at the end. Proctime attributes must be appended at the end. This mode can only be
used if the input type has a defined field order (tuple, case class, Row) and none of
the fields references a field of the input type.
Example:
DataStream<Tuple2<String, Long>> stream = ...
// rename the original fields to 'a' and 'b' and extract the internally attached timestamp into an event-time
// attribute named 'rowtime'
Table table = tableEnv.fromDataStream(stream, "a, b, rowtime.rowtime");
Specified by:
fromDataStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
fields - The fields expressions to map original fields of the DataStream to the fields of the Table.
Returns:
The converted Table.

public <T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: StreamTableEnvironment
Converts the given DataStream into a Table with specified field names.
There are two modes for mapping original fields to the fields of the Table:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:
DataStream<Tuple2<String, Long>> stream = ...
Table table = tableEnv.fromDataStream(
stream,
$("f1"), // reorder and use the original field
$("rowtime").rowtime(), // extract the internally attached timestamp into an event-time
// attribute named 'rowtime'
$("f0").as("name") // reorder and give the original field a better name
);
2. Reference input fields by position:
In this mode, fields are simply renamed. Event-time attributes can
replace the field on their position in the input data (if it is of correct type) or be
appended at the end. Proctime attributes must be appended at the end. This mode can only be
used if the input type has a defined field order (tuple, case class, Row) and none of
the fields references a field of the input type.
Example:
DataStream<Tuple2<String, Long>> stream = ...
Table table = tableEnv.fromDataStream(
stream,
$("a"), // rename the first field to 'a'
$("b"), // rename the second field to 'b'
$("rowtime").rowtime() // extract the internally attached timestamp into an event-time
// attribute named 'rowtime'
);
Specified by:
fromDataStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream.
Parameters:
dataStream - The DataStream to be converted.
fields - The fields expressions to map original fields of the DataStream to the fields of the Table.
Returns:
The converted Table.

public <T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream.
Registered views can be referenced in SQL queries.
The field names of the Table are automatically derived
from the type of the DataStream.
The view is registered in the namespace of the current catalog and database. To register the view in
a different catalog use StreamTableEnvironment.createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
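The shadowing rule above can be sketched in plain Java, independent of Flink's actual CatalogManager (all class and method names below are illustrative, not Flink internals): lookups consult a temporary registry before the permanent one, so a temporary view hides a permanent table at the same path until it is dropped.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of temporary-over-permanent lookup; not Flink's CatalogManager.
public class ShadowingDemo {
    private final Map<String, String> permanent = new HashMap<>();
    private final Map<String, String> temporary = new HashMap<>();

    public void createPermanent(String path, String table) { permanent.put(path, table); }
    public void createTemporaryView(String path, String view) { temporary.put(path, view); }
    public void dropTemporaryView(String path) { temporary.remove(path); }

    // Temporary objects shadow permanent ones under the same path.
    public String resolve(String path) {
        return temporary.getOrDefault(path, permanent.get(path));
    }

    public static void main(String[] args) {
        ShadowingDemo catalog = new ShadowingDemo();
        catalog.createPermanent("cat.db.myTable", "permanentTable");
        catalog.createTemporaryView("cat.db.myTable", "temporaryView");
        System.out.println(catalog.resolve("cat.db.myTable")); // temporary view shadows
        catalog.dropTemporaryView("cat.db.myTable");
        System.out.println(catalog.resolve("cat.db.myTable")); // permanent table visible again
    }
}
```

Dropping the temporary object is the only way to reach the shadowed permanent one again in the same session, which is exactly the behavior the paragraph above describes.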
Specified by:
registerDataStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.

public <T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path.
Registered views can be referenced in SQL queries.
The field names of the Table are automatically derived
from the type of the DataStream.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by:
createTemporaryView in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.

public <T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names.
Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the View:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:
DataStream<Tuple2<String, Long>> stream = ...
// reorder the fields, rename the original 'f0' field to 'name' and add event-time
// attribute named 'rowtime'
tableEnv.registerDataStream("myTable", stream, "f1, rowtime.rowtime, f0 as 'name'");
2. Reference input fields by position:
In this mode, fields are simply renamed. Event-time attributes can
replace the field on their position in the input data (if it is of correct type) or be
appended at the end. Proctime attributes must be appended at the end. This mode can only be
used if the input type has a defined field order (tuple, case class, Row) and none of
the fields references a field of the input type.
Example:
DataStream<Tuple2<String, Long>> stream = ...
// rename the original fields to 'a' and 'b' and extract the internally attached timestamp into an event-time
// attribute named 'rowtime'
tableEnv.registerDataStream("myTable", stream, "a, b, rowtime.rowtime");
The view is registered in the namespace of the current catalog and database. To register the view in
a different catalog use StreamTableEnvironment.createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by:
registerDataStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream to register.
Parameters:
name - The name under which the DataStream is registered in the catalog.
dataStream - The DataStream to register.
fields - The fields expressions to map original fields of the DataStream to the fields of the View.

public <T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, String fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names.
Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the View:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:
DataStream<Tuple2<String, Long>> stream = ...
// reorder the fields, rename the original 'f0' field to 'name' and add event-time
// attribute named 'rowtime'
tableEnv.createTemporaryView("cat.db.myTable", stream, "f1, rowtime.rowtime, f0 as 'name'");
2. Reference input fields by position:
In this mode, fields are simply renamed. Event-time attributes can
replace the field on their position in the input data (if it is of correct type) or be
appended at the end. Proctime attributes must be appended at the end. This mode can only be
used if the input type has a defined field order (tuple, case class, Row) and none of
the fields references a field of the input type.
Example:
DataStream<Tuple2<String, Long>> stream = ...
// rename the original fields to 'a' and 'b' and extract the internally attached timestamp into an event-time
// attribute named 'rowtime'
tableEnv.createTemporaryView("cat.db.myTable", stream, "a, b, rowtime.rowtime");
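The `cat.db.myTable` strings used above follow the generic `[[catalog.]database.]object` path format documented on TableEnvironment. As a rough plain-Java sketch (illustrative only; Flink's actual parser also handles quoting and escaping), a partial path is completed against the current catalog and database:

```java
// Illustrative resolution of a 1-, 2-, or 3-part object path;
// NOT Flink's actual parser, which also handles quoted identifiers.
public class PathDemo {
    public static String resolve(String path, String currentCatalog, String currentDatabase) {
        String[] parts = path.split("\\.");
        switch (parts.length) {
            case 1: // just the object name: use current catalog and database
                return currentCatalog + "." + currentDatabase + "." + parts[0];
            case 2: // database.object: use current catalog
                return currentCatalog + "." + parts[0] + "." + parts[1];
            case 3: // already fully qualified
                return path;
            default:
                throw new IllegalArgumentException("Invalid path: " + path);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("myTable", "cat", "db"));        // cat.db.myTable
        System.out.println(resolve("db.myTable", "cat", "db"));     // cat.db.myTable
        System.out.println(resolve("cat.db.myTable", "cat", "db")); // cat.db.myTable
    }
}
```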
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by:
createTemporaryView in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
fields - The fields expressions to map original fields of the DataStream to the fields of the View.

public <T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: StreamTableEnvironment
Creates a view from the given DataStream in a given path with specified field names.
Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the View:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:
DataStream<Tuple2<String, Long>> stream = ...
tableEnv.createTemporaryView(
"cat.db.myTable",
stream,
$("f1"), // reorder and use the original field
$("rowtime").rowtime(), // extract the internally attached timestamp into an event-time
// attribute named 'rowtime'
$("f0").as("name") // reorder and give the original field a better name
);
2. Reference input fields by position:
In this mode, fields are simply renamed. Event-time attributes can
replace the field on their position in the input data (if it is of correct type) or be
appended at the end. Proctime attributes must be appended at the end. This mode can only be
used if the input type has a defined field order (tuple, case class, Row) and none of
the fields references a field of the input type.
Example:
DataStream<Tuple2<String, Long>> stream = ...
tableEnv.createTemporaryView(
"cat.db.myTable",
stream,
$("a"), // rename the first field to 'a'
$("b"), // rename the second field to 'b'
$("rowtime").rowtime() // adds an event-time attribute named 'rowtime'
);
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
Specified by:
createTemporaryView in interface StreamTableEnvironment
Type Parameters:
T - The type of the DataStream.
Parameters:
path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
dataStream - The DataStream out of which to create the view.
fields - The fields expressions to map original fields of the DataStream to the fields of the View.

protected org.apache.flink.table.operations.QueryOperation qualifyQueryOperation(org.apache.flink.table.catalog.ObjectIdentifier identifier, org.apache.flink.table.operations.QueryOperation queryOperation)
Overrides:
qualifyQueryOperation in class org.apache.flink.table.api.internal.TableEnvironmentImpl

public <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified
by update or delete changes, the conversion will fail.
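The insert-only restriction can be illustrated without Flink: the sketch below models a changelog as (isInsert, row) pairs and rejects the conversion as soon as a non-insert change appears, mirroring the failure described above (class and method names are made up for illustration).

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative model of the append-stream constraint; not Flink's converter.
public class AppendStreamDemo {
    // Each change is (true = insert, false = update/delete change).
    public static List<String> toAppendStream(List<Map.Entry<Boolean, String>> changes) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<Boolean, String> change : changes) {
            if (!change.getKey()) {
                // A Table with update or delete changes cannot become an append stream.
                throw new IllegalStateException("Table is not append-only: " + change.getValue());
            }
            out.add(change.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<Boolean, String>> inserts = List.of(
                new SimpleEntry<>(true, "a"), new SimpleEntry<>(true, "b"));
        System.out.println(toAppendStream(inserts)); // succeeds: [a, b]
    }
}
```

A Table that is updated in place (e.g. a grouped aggregation) would produce non-insert changes and must use toRetractStream instead.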
The fields of the Table are mapped to DataStream fields as follows:
Row and Tuple types: Fields are mapped by position, field types must match.
POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by:
toAppendStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
clazz - The class of the type of the resulting DataStream.
Returns:
The converted DataStream.

public <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Description copied from interface: StreamTableEnvironment
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified
by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
Row and Tuple types: Fields are mapped by position, field types must match.
POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by:
toAppendStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the resulting DataStream.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation that specifies the type of the DataStream.
Returns:
The converted DataStream.

public <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag,
the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
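To make the encoding concrete, the sketch below (plain Java, no Flink types; the message list is invented for illustration) materializes a stream of (flag, record) messages into current counts: a true flag adds the record, a false flag retracts a previous add. This mirrors how a downstream consumer would interpret the Tuple2<Boolean, T> messages.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative consumer of an add/retract stream; not a Flink API.
public class RetractDemo {
    public static Map<String, Integer> materialize(List<Map.Entry<Boolean, String>> messages) {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<Boolean, String> msg : messages) {
            int delta = msg.getKey() ? 1 : -1; // true = add, false = retract
            counts.merge(msg.getValue(), delta, Integer::sum);
            counts.remove(msg.getValue(), 0);  // drop fully retracted records
        }
        return counts;
    }

    public static void main(String[] args) {
        // add "a", add "b", retract "a", add "a" again
        List<Map.Entry<Boolean, String>> messages = List.of(
                new SimpleEntry<>(true, "a"),
                new SimpleEntry<>(true, "b"),
                new SimpleEntry<>(false, "a"),
                new SimpleEntry<>(true, "a"));
        System.out.println(materialize(messages)); // counts: a=1, b=1
    }
}
```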
The fields of the Table are mapped to DataStream fields as follows:
Row and Tuple types: Fields are mapped by position, field types must match.
POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by:
toRetractStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the requested record.
Parameters:
table - The Table to convert.
clazz - The class of the requested record type.
Returns:
The converted DataStream.

public <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Description copied from interface: StreamTableEnvironment
Converts the given Table into a DataStream of add and retract messages.
The message will be encoded as Tuple2. The first field is a Boolean flag,
the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
Row and Tuple types: Fields are mapped by position, field types must match.
POJO DataStream types: Fields are mapped by field name, field types must match.
Specified by:
toRetractStream in interface StreamTableEnvironment
Type Parameters:
T - The type of the requested record.
Parameters:
table - The Table to convert.
typeInfo - The TypeInformation of the requested record type.
Returns:
The converted DataStream.

public org.apache.flink.table.descriptors.StreamTableDescriptor connect(org.apache.flink.table.descriptors.ConnectorDescriptor connectorDescriptor)
Description copied from interface: StreamTableEnvironment
Creates a table source and/or table sink from a descriptor.
Descriptors allow for declaring the communication to external systems in an implementation-agnostic way. The classpath is scanned for suitable table factories that match the desired configuration.
The following example shows how to read from a Kafka connector using a JSON format and registering a table source "MyTable" in append mode:
tableEnv
.connect(
new Kafka()
.version("0.11")
.topic("clicks")
.property("group.id", "click-group")
.startFromEarliest())
.withFormat(
new Json()
.jsonSchema("{...}")
.failOnMissingField(false))
.withSchema(
new Schema()
.field("user-name", "VARCHAR").from("u_name")
.field("count", "DECIMAL")
.field("proc-time", "TIMESTAMP").proctime())
.inAppendMode()
.createTemporaryTable("MyTable")
Specified by:
connect in interface StreamTableEnvironment
Specified by:
connect in interface org.apache.flink.table.api.TableEnvironment
Overrides:
connect in class org.apache.flink.table.api.internal.TableEnvironmentImpl
Parameters:
connectorDescriptor - connector descriptor describing the external system

@Internal
public org.apache.flink.streaming.api.environment.StreamExecutionEnvironment execEnv()
This is a temporary workaround for Python API.

public org.apache.flink.api.dag.Pipeline getPipeline(String jobName)
This method is used by the SQL client to submit a job.

protected void validateTableSource(org.apache.flink.table.sources.TableSource<?> tableSource)
Overrides:
validateTableSource in class org.apache.flink.table.api.internal.TableEnvironmentImpl

Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.