Apache HBase is a high-performance, open-source, distributed NoSQL database designed for the Hadoop ecosystem, and it is especially well suited to scenarios that require fast random read and write access to large datasets. This article introduces the basics of HBase and uses a business scenario of Users, Jobs, and their Subtasks to illustrate the related concepts.

Basic Concepts of HBase #

Data Model Design

In HBase, the data model is defined through the following core components:

  • Table: A table in HBase is a logical collection of data, similar to a table in a relational database. For example, we can have a users table to store all user data.

  • Namespace: A namespace is a logical grouping of tables used to organize tables within a large HBase instance. In HBase, the default namespace is default.

  • Row Key: The Row Key is the unique identifier for each row in the table. HBase stores rows in lexicographic (byte) order by Row Key. When designing the Row Key, consider the data access pattern to avoid data skew and improve query efficiency.

  • Column Family: A Column Family is a logical grouping of columns in the table. All columns in the same Column Family share the same storage attributes, and their data is stored together in the same set of HFiles.

  • Column: A Column is a specific attribute under a Column Family, containing specific values. In HBase, a Column is identified by both the Column Family and the Column Qualifier (the specific column name).

  • Timestamp: Each cell in HBase carries a timestamp recording when that version of the data was written. This enables version control, allowing HBase to keep and manage multiple versions of a value.
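Because Row Keys sort as raw bytes rather than as numbers, zero-padding numeric parts keeps byte order aligned with numeric order. A small Python sketch (the key values are made up for illustration):

```python
# HBase compares row keys as raw bytes, so "user_10" sorts before "user_2".
unpadded = [b"user_2", b"user_10", b"user_1"]
padded = [b"user_0002", b"user_0010", b"user_0001"]

# Byte-wise sort puts user_10 before user_2 ...
print(sorted(unpadded))  # [b'user_1', b'user_10', b'user_2']
# ... while zero-padded IDs sort in numeric order.
print(sorted(padded))    # [b'user_0001', b'user_0002', b'user_0010']
```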

Schema Design in Business Scenarios

Consider the following business scenario for HBase schema design:

Users Schema

  • Table: users
  • Namespace: default
  • Row Key: The unique identifier for the user, such as the user ID (user_123)
  • Column Family: user_details
    • Columns:
      • id: User ID
      • name: User's name
      • email: User's email
      • created_at: The time the user was created
  • Timestamp: Records the last time the user data was updated

Jobs Schema

  • Table: jobs
  • Namespace: default
  • Row Key: A composite of the user ID and the job ID (user_job_123_456, where 123 is the user ID and 456 is the job ID); prefixing with the user ID keeps each user's jobs contiguous, so they can be read efficiently with a single prefix scan
  • Column Family: job_details
    • Columns:
      • user_id: The associated user ID
      • description: Job description
      • created_at: The time the job was created

Tasks Schema

  • Table: tasks
  • Namespace: default
  • Row Key: A composite key for the subtask (user_job_task_123_456_789), where 123 is the user ID, 456 is the job ID, and 789 is the subtask ID
  • Column Family: task_info
    • Columns:
      • job_id: The associated job ID
      • details: Task details
      • status: Task status
      • completion_time: The time the task was completed
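The three row-key formats above can be captured in small helper functions. A Python sketch (the function names are hypothetical; the key formats come from the schemas above):

```python
# Hypothetical helpers that build the composite row keys described above.
def user_key(user_id: int) -> str:
    return f"user_{user_id}"

def job_key(user_id: int, job_id: int) -> str:
    # Prefixing with the user ID keeps a user's jobs contiguous,
    # so they can be fetched with a single prefix scan.
    return f"user_job_{user_id}_{job_id}"

def task_key(user_id: int, job_id: int, task_id: int) -> str:
    return f"user_job_task_{user_id}_{job_id}_{task_id}"

print(user_key(123))            # user_123
print(job_key(123, 456))        # user_job_123_456
print(task_key(123, 456, 789))  # user_job_task_123_456_789
```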

Basic Structure of HBase #

HBase is a distributed, column-oriented database that is based on Google's Bigtable model and runs on Hadoop's HDFS. The core components of HBase include:

  1. HMaster: Responsible for monitoring the cluster status and assigning Regions to RegionServers.
  2. RegionServer: Responsible for handling read and write requests for specific Regions.
  3. Region: A horizontal slice of a table, containing a certain range of rows.
  4. Store: The data storage within a Region for a Column Family.
  5. HFile: The underlying storage file format of HBase (modeled on Bigtable's SSTable), persisted on HDFS.

HBase Design Practice Principles #

In-Depth Design of Row Key

The design of the Row Key is crucial to the performance of HBase, and here are some key points and examples:

  1. Avoid Monotonically Increasing Keys: Using timestamps or auto-incrementing IDs as Row Keys can create write hotspots. For example, a timestamp Row Key (e.g., 20230101_1500) would route all writes from 3 PM to the same Region.

  2. Use Salting: Distribute data by adding a computed prefix (such as a hash-derived bucket number) to avoid hotspots; the prefix must be deterministic so that reads can reconstruct it. For example, hash the user ID and include part of the result in the Row Key (e.g., hash(user_id)).

  3. Hash Function: Use a hash to scatter the data to distribute it more evenly. For example, for URL data, you can use the hash value of the URL as part of the Row Key (e.g., hash(url)).

  4. Reverse Keys: Reverse the monotonically increasing part so that the fastest-changing digits come first, scattering consecutive values across the key space. For example, reversing the serial number 0001 yields 1000, while the next serial 0002 becomes 2000.

  5. Composite Row Key: In business scenarios, a composite Row Key can include multiple fields, which are usually key parts of the business logic. For example, in the jobs and tasks tables, we use a composite Row Key to maintain the logical relationship and query efficiency of the data.
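The salting, hashing, and reversal techniques above can be sketched in Python (the bucket count and helper names are illustrative choices, not HBase APIs):

```python
import hashlib

def salted_key(user_id: str, buckets: int = 16) -> str:
    # Salting: a short, deterministic hash-derived prefix spreads
    # sequential IDs across `buckets` distinct key ranges (Regions).
    salt = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % buckets
    return f"{salt:02d}_{user_id}"

def hashed_key(url: str) -> str:
    # Hashing: use part of the digest as the leading key component.
    return hashlib.md5(url.encode()).hexdigest()[:8] + "_" + url

def reversed_key(serial: str) -> str:
    # Key reversal: the fastest-changing digit moves to the front.
    return serial[::-1]

print(reversed_key("0001"))  # 1000
print(reversed_key("0002"))  # 2000
```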

Column Family Design

A Column Family is a logical grouping of columns in the table; columns in the same family are stored together in the same set of HFiles, while different families are stored separately:

  • Rule of Thumb: It is generally recommended to have no more than 3 Column Families, to limit the overhead of metadata storage and compaction. Base the design of Column Families on the data access pattern, grouping columns that are frequently accessed together into the same Column Family.

Example for Users

For the Users table, we might design as follows:

  • Column Family: user_details
    • Columns: id, name, email, created_at, updated_at

In this design, all information about the user is stored in one Column Family because these pieces of information are likely to be accessed together.
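In the HBase shell, this schema might be created as follows (the table and family names come from the example above):

```shell
# HBase shell: one column family holding all user attributes
create 'users', {NAME => 'user_details'}
```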

Importance of Data Locality

Data locality refers to the physical location of data in relation to the computational resources that process it. High data locality can reduce network transmission and improve performance:

  • Localization Strategy: Design data storage and processing strategies to improve data locality. For example, if most user requests come from a specific geographic location, you can store the data of these users in the data center in that region.

Data Cleanup Strategy

In HBase, lifecycle management of data is crucial to prevent unlimited data growth:

  • TTL (Time-to-Live): Set a lifespan on data so that expired data is cleaned up automatically. For example, you can give user session data a short TTL of one hour (TTL => 3600, in seconds).
  • Version Control: Limit the number of versions for each cell to prevent the accumulation of old data. For example, for user status updates, you can set the number of versions to 5, so even with frequent updates, only the most recent 5 statuses will be retained (VERSIONS => 5).
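In the HBase shell, both policies are set as column-family attributes; the table and family names here are illustrative:

```shell
# Session data expires after one hour (TTL is in seconds)
create 'sessions', {NAME => 'session_info', TTL => 3600}

# Keep at most the 5 most recent versions of each cell
alter 'users', {NAME => 'user_details', VERSIONS => 5}
```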

Customization of Caching Strategy

HBase's caching mechanism is crucial for improving read performance:

  • Block Cache: Used to cache read data blocks. For frequently read data, such as user information, you can fully utilize the Block Cache.
  • MemStore: Used to cache recently written data. For write-intensive operations, such as user registration, you can optimize the configuration of MemStore.

Choice of Compression Strategy

Data compression can play a role in saving storage space and improving I/O performance:

  • Choose the Right Compression Algorithm: Such as Snappy or LZO, to balance the compression ratio and CPU overhead. For example, for text data, Snappy is a good choice because it provides a good compression ratio and faster compression/decompression speed.
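Compression is also a column-family attribute; a sketch in the HBase shell (table and family names are illustrative):

```shell
# Enable Snappy compression on a column family (the Snappy native
# libraries must be available on the RegionServers)
create 'logs', {NAME => 'log_data', COMPRESSION => 'SNAPPY'}
```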

In-Depth Parameter Tuning

HBase provides a large number of configuration parameters that can be adjusted according to the workload:

  • Read-Write Ratio: Adjust the size ratio of Block Cache and MemStore to adapt to read-write loads. For example, for read-intensive applications, you can increase the proportion of Block Cache.
  • Flush and Compaction: Adjust the trigger conditions for flushing and compaction to optimize write performance. For example, you can increase the threshold of the number of files to trigger compaction to reduce the frequency of compaction operations.
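As a sketch, the heap split and compaction trigger mentioned above map to the following hbase-site.xml properties (the values shown are illustrative, not recommendations):

```xml
<!-- Fraction of heap for the read-side Block Cache -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>
<!-- Fraction of heap shared by all write-side MemStores -->
<property>
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.4</value>
</property>
<!-- Number of StoreFiles that triggers a minor compaction -->
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>5</value>
</property>
```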

Monitoring and Performance Evaluation

Monitoring tools can help you track system performance and identify issues in a timely manner:

  • Use Grafana or Ambari: Monitor the operational status and performance metrics of HBase, such as RegionServer request latency and read/write throughput.

Avoid Using JOIN

In NoSQL databases like HBase, JOIN operations are not supported, so different strategies need to be adopted to handle scenarios that may require JOIN:

  • Denormalization: Pre-copy the data that needs to be JOINed into the same table, or design a separate table to store the combined view of this data.

  • Pre-aggregation: Calculate and store aggregated data in advance when writing data to HBase, so that the required information can be directly obtained during the query without the need for a JOIN operation.

  • Application Layer JOIN: Move the JOIN into the application layer: fetch the data from each table in application logic, then merge it there. This increases the complexity of the application, but it avoids JOIN-like work at the database level.

  • Use Views or Materialized Views: In some cases, you can use HBase's coprocessor framework to create views or materialized views, which can simulate the effect of JOIN operations.
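An application-layer join over the users and jobs tables above might look like this Python sketch, with dicts standing in for HBase Get/Scan results (the helper name and sample data are hypothetical):

```python
# In-memory stand-ins for HBase rows, keyed by the row keys from the schemas.
users = {"user_123": {"name": "Alice", "email": "alice@example.com"}}
jobs = {
    "user_job_123_456": {"user_id": "123", "description": "resize images"},
    "user_job_123_457": {"user_id": "123", "description": "build index"},
}

def jobs_with_user(user_id: str) -> list[dict]:
    # One Get for the user, one prefix scan for the jobs, merged in code --
    # the composite row key replaces what a SQL JOIN would do.
    user = users[f"user_{user_id}"]
    prefix = f"user_job_{user_id}_"
    return [
        {**job, "user_name": user["name"]}
        for key, job in sorted(jobs.items())
        if key.startswith(prefix)
    ]

print(jobs_with_user("123"))
```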

Other Common Pitfalls

Avoid the following common errors when designing and using HBase:

  • Hotspot Issues: Poor Row Key design can overload individual RegionServers. For example, if all write operations target the same Row Key range, the RegionServer hosting that range bears the entire load.

  • Column Family Design: Avoid having too many Column Families, which can increase management overhead and may lead to performance degradation. For example, if a table has dozens of Column Families, and each Column Family contains only a few columns, this will result in a large number of file open and close operations, affecting performance.


HBase is a highly customizable NoSQL database suitable for processing large datasets. With careful design and continuous optimization, HBase can become a powerful tool for big data processing. The key to best practices lies in a deep understanding of the working principles of HBase and customization according to specific business needs. There is no one-size-fits-all solution, and continuous monitoring and adjustment are key to ensuring high performance of HBase.

Corrections and improvements are welcome.



