org.apache.hadoop.hbase.regionserver.wal
Class HLogKey
java.lang.Object
org.apache.hadoop.hbase.regionserver.wal.HLogKey
- All Implemented Interfaces:
- Comparable<HLogKey>, org.apache.hadoop.io.Writable, org.apache.hadoop.io.WritableComparable<HLogKey>
@InterfaceAudience.LimitedPrivate(value="Replication")
public class HLogKey
- extends Object
- implements org.apache.hadoop.io.WritableComparable<HLogKey>
A Key for an entry in the change log.
The log intermingles edits to many tables and rows, so each log entry
identifies the appropriate table and row. Within a table and row, they're
also sorted.
Some Transactional edits (START, COMMIT, ABORT) will not have an
associated row.
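The ordering described above can be sketched with a minimal stand-in class (this is an illustration only, not HBase's actual implementation; the class name, fields, and exact comparison order are assumptions based on the description above):

```java
import java.util.Arrays;

// Minimal sketch of how a WAL key might order entries: first by encoded
// region name bytes, then by log sequence number. The class name and the
// exact comparison order are assumptions for illustration only.
public class WalKeySketch implements Comparable<WalKeySketch> {
    final byte[] encodedRegionName;
    final long logSeqNum;

    WalKeySketch(byte[] encodedRegionName, long logSeqNum) {
        this.encodedRegionName = encodedRegionName;
        this.logSeqNum = logSeqNum;
    }

    @Override
    public int compareTo(WalKeySketch o) {
        // Compare region names lexicographically as unsigned bytes.
        int result = compareBytes(this.encodedRegionName, o.encodedRegionName);
        if (result == 0) {
            // Same region: older (smaller) sequence numbers sort first.
            result = Long.compare(this.logSeqNum, o.logSeqNum);
        }
        return result;
    }

    private static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        WalKeySketch k1 = new WalKeySketch("region-a".getBytes(), 5L);
        WalKeySketch k2 = new WalKeySketch("region-a".getBytes(), 9L);
        WalKeySketch k3 = new WalKeySketch("region-b".getBytes(), 1L);
        System.out.println(k1.compareTo(k2) < 0); // same region, lower seq first
        System.out.println(k1.compareTo(k3) < 0); // region-a sorts before region-b
    }
}
```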
Field Summary
static org.apache.commons.logging.Log LOG
Constructor Summary
HLogKey()
HLogKey(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce)
    Create the log key for writing to somewhere.
HLogKey(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, UUID clusterId)
Method Summary
void addClusterId(UUID clusterId)
    Marks that the cluster with the given clusterId has consumed the change.
int compareTo(HLogKey o)
boolean equals(Object obj)
org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey.Builder getBuilder(WALCellCodec.ByteStringCompressor compressor)
List<UUID> getClusterIds()
byte[] getEncodedRegionName()
long getLogSeqNum()
long getNonce()
long getNonceGroup()
UUID getOriginatingClusterId()
NavigableMap<byte[],Integer> getScopes()
TableName getTablename()
long getWriteTime()
int hashCode()
protected void init(byte[] encodedRegionName, TableName tablename, long logSeqNum, long now, List<UUID> clusterIds, long nonceGroup, long nonce)
void readFields(DataInput in)
void readFieldsFromPb(org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey walKey, WALCellCodec.ByteStringUncompressor uncompressor)
void readOlderScopes(NavigableMap<byte[],Integer> scopes)
void setCompressionContext(org.apache.hadoop.hbase.regionserver.wal.CompressionContext compressionContext)
void setScopes(NavigableMap<byte[],Integer> scopes)
String toString()
Map<String,Object> toStringMap()
    Produces a string map for this key.
void write(DataOutput out)
    Deprecated.
LOG
public static final org.apache.commons.logging.Log LOG
HLogKey
public HLogKey()
HLogKey
public HLogKey(byte[] encodedRegionName,
TableName tablename,
long logSeqNum,
long now,
UUID clusterId)
HLogKey
public HLogKey(byte[] encodedRegionName,
TableName tablename,
long logSeqNum,
long now,
List<UUID> clusterIds,
long nonceGroup,
long nonce)
- Create the log key for writing to somewhere.
We maintain the tablename mainly for debugging purposes.
A regionName is always a sub-table object.
- Parameters:
encodedRegionName - Encoded name of the region as returned by HRegionInfo#getEncodedNameAsBytes()
tablename - name of table
logSeqNum - log sequence number
now - Time at which this edit was written.
clusterIds - the clusters that have consumed the change (used in Replication)
init
protected void init(byte[] encodedRegionName,
TableName tablename,
long logSeqNum,
long now,
List<UUID> clusterIds,
long nonceGroup,
long nonce)
setCompressionContext
public void setCompressionContext(org.apache.hadoop.hbase.regionserver.wal.CompressionContext compressionContext)
- Parameters:
compressionContext - Compression context to use
getEncodedRegionName
public byte[] getEncodedRegionName()
- Returns:
- encoded region name
getTablename
public TableName getTablename()
- Returns:
- table name
getLogSeqNum
public long getLogSeqNum()
- Returns:
- log sequence number
getWriteTime
public long getWriteTime()
- Returns:
- the write time
getScopes
public NavigableMap<byte[],Integer> getScopes()
getNonceGroup
public long getNonceGroup()
- Returns:
- The nonce group
getNonce
public long getNonce()
- Returns:
- The nonce
setScopes
public void setScopes(NavigableMap<byte[],Integer> scopes)
readOlderScopes
public void readOlderScopes(NavigableMap<byte[],Integer> scopes)
addClusterId
public void addClusterId(UUID clusterId)
- Marks that the cluster with the given clusterId has consumed the change
getClusterIds
public List<UUID> getClusterIds()
- Returns:
- the set of cluster Ids that have consumed the change
getOriginatingClusterId
public UUID getOriginatingClusterId()
- Returns:
- the cluster id on which the change originated. If there is no such cluster, it
returns DEFAULT_CLUSTER_ID (e.g. when replication is not enabled)
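The fallback behavior described above can be sketched as follows (the concrete DEFAULT_CLUSTER_ID value and the assumption that the originator is the first entry in the cluster-id list are illustrative, not taken from HBase's source):

```java
import java.util.Collections;
import java.util.List;
import java.util.UUID;

// Sketch of the originating-cluster lookup: the first cluster id in the
// consumption list is treated as the originator; an empty list falls back
// to a sentinel DEFAULT_CLUSTER_ID (the concrete value here is assumed).
public class OriginatingClusterSketch {
    static final UUID DEFAULT_CLUSTER_ID = new UUID(0L, 0L);

    static UUID originatingClusterId(List<UUID> clusterIds) {
        return clusterIds.isEmpty() ? DEFAULT_CLUSTER_ID : clusterIds.get(0);
    }

    public static void main(String[] args) {
        UUID origin = UUID.randomUUID();
        System.out.println(originatingClusterId(List.of(origin)).equals(origin));
        System.out.println(originatingClusterId(Collections.emptyList())
            .equals(DEFAULT_CLUSTER_ID));
    }
}
```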
toString
public String toString()
- Overrides:
toString in class Object
toStringMap
public Map<String,Object> toStringMap()
- Produces a string map for this key. Useful for programmatic use and
manipulation of the data stored in an HLogKey, for example, printing
as JSON.
- Returns:
- a Map containing data from this key
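A rough sketch of what such a string map might look like (the map keys and the fields included are assumptions drawn from the getters above, not the exact keys HBase emits):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a toStringMap-style view of a WAL key: expose each field under
// a readable name so callers can serialize the key, e.g. as JSON. The map
// keys chosen here are assumptions, not HBase's exact output.
public class KeyToMapSketch {
    static Map<String, Object> toStringMap(String table, String region,
                                           long seqNum, long writeTime) {
        // LinkedHashMap preserves insertion order for stable output.
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("table", table);
        m.put("region", region);
        m.put("sequence", seqNum);
        m.put("write_time", writeTime);
        return m;
    }

    public static void main(String[] args) {
        Map<String, Object> m = toStringMap("users", "abc123", 42L, 1700000000000L);
        System.out.println(m);
    }
}
```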
equals
public boolean equals(Object obj)
- Overrides:
equals in class Object
hashCode
public int hashCode()
- Overrides:
hashCode in class Object
compareTo
public int compareTo(HLogKey o)
- Specified by:
compareTo in interface Comparable<HLogKey>
write
@Deprecated
public void write(DataOutput out)
throws IOException
- Deprecated.
- Specified by:
write in interface org.apache.hadoop.io.Writable
- Throws:
IOException
readFields
public void readFields(DataInput in)
throws IOException
- Specified by:
readFields in interface org.apache.hadoop.io.Writable
- Throws:
IOException
getBuilder
public org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey.Builder getBuilder(WALCellCodec.ByteStringCompressor compressor)
throws IOException
- Throws:
IOException
readFieldsFromPb
public void readFieldsFromPb(org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey walKey,
WALCellCodec.ByteStringUncompressor uncompressor)
throws IOException
- Throws:
IOException
Copyright © 2007-2015 The Apache Software Foundation. All Rights Reserved.