public class TableStoreRecordReader extends Object implements org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>
RecordReader for table store. Reads KeyValues from data files and picks out
RowData for Hive to consume.
NOTE: To support projection push-down, this reader still produces records in the original schema even when selectedColumns does not match columnNames; columns not in selectedColumns are set to null.
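The projection behavior described in the note can be illustrated with a self-contained sketch. The `project` helper and all types here are simplified stand-ins for illustration, not Table Store classes: the point is that the output row keeps the full schema's width, with unselected columns nulled out.

```java
import java.util.Arrays;
import java.util.List;

// Illustration only: mimics the projection push-down behavior described
// above using plain arrays instead of RowData. Not Table Store code.
public class ProjectionDemo {

    // Produce a record in the original schema, nulling out columns
    // that are not in selectedColumns.
    static Object[] project(Object[] fullRow, List<String> columnNames, List<String> selectedColumns) {
        Object[] out = new Object[fullRow.length];
        for (int i = 0; i < fullRow.length; i++) {
            out[i] = selectedColumns.contains(columnNames.get(i)) ? fullRow[i] : null;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> columnNames = Arrays.asList("id", "name", "price");
        List<String> selected = Arrays.asList("id", "price");
        Object[] row = {1L, "apple", 3.5};
        Object[] projected = project(row, columnNames, selected);
        // Output keeps the original 3-column schema; "name" becomes null.
        System.out.println(Arrays.toString(projected)); // [1, null, 3.5]
    }
}
```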
| Constructor and Description |
|---|
| TableStoreRecordReader(org.apache.flink.table.store.table.source.TableRead read, TableStoreInputSplit split, List<String> columnNames, List<String> selectedColumns) |
public TableStoreRecordReader(org.apache.flink.table.store.table.source.TableRead read,
TableStoreInputSplit split,
List<String> columnNames,
List<String> selectedColumns)
throws IOException
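Once constructed, the reader is driven through the standard mapred loop: create a key and value holder, then call next repeatedly until it returns false. The sketch below shows that pattern with minimal stand-in types (a toy in-memory reader instead of the real TableRead-backed one) so it runs on its own; it is not the actual implementation.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ReaderLoopDemo {

    // Minimal stand-in for org.apache.hadoop.mapred.RecordReader,
    // so the example compiles without Hadoop on the classpath.
    interface RecordReader<K, V> {
        boolean next(K key, V value) throws IOException;
        K createKey();
        V createValue();
        void close() throws IOException;
    }

    // Stand-in for RowDataContainer: a mutable holder the reader fills in.
    static class RowHolder {
        Object[] row;
    }

    // Toy reader over an in-memory list; the real reader pulls rows
    // from Table Store data files via TableRead.
    static class ListReader implements RecordReader<Void, RowHolder> {
        private final Iterator<Object[]> it;

        ListReader(List<Object[]> rows) {
            this.it = rows.iterator();
        }

        public boolean next(Void key, RowHolder value) {
            if (!it.hasNext()) {
                return false;
            }
            value.row = it.next(); // next() mutates the value holder in place
            return true;
        }

        public Void createKey() {
            return null; // key type is Void, as in the real reader
        }

        public RowHolder createValue() {
            return new RowHolder();
        }

        public void close() {}
    }

    public static void main(String[] args) throws IOException {
        RecordReader<Void, RowHolder> reader =
                new ListReader(Arrays.asList(new Object[]{1L, "a"}, new Object[]{2L, "b"}));
        Void key = reader.createKey();
        RowHolder value = reader.createValue();
        int count = 0;
        while (reader.next(key, value)) { // standard consumption loop
            count++;
        }
        reader.close();
        System.out.println(count); // 2
    }
}
```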
Throws:
IOException

public boolean next(Void key, RowDataContainer value) throws IOException
Specified by: next in interface org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>
Throws: IOException

public Void createKey()
Specified by: createKey in interface org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>

public RowDataContainer createValue()
Specified by: createValue in interface org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>

public long getPos() throws IOException
Specified by: getPos in interface org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>
Throws: IOException

public void close() throws IOException
Specified by: close in interface org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>
Throws: IOException

public float getProgress() throws IOException
Specified by: getProgress in interface org.apache.hadoop.mapred.RecordReader<Void,RowDataContainer>
Throws: IOException

Copyright © 2019–2022 The Apache Software Foundation. All rights reserved.