Why Choose SMTX ZBS?
Reliable and Stable. The core of SMTX ZBS is independently developed by SmartX and has been proven through long-term use in production environments in the financial industry, guaranteeing stability.
Excellent Performance. Full-stack performance optimization delivers high performance and low latency, and this competitive performance has been verified in real customer workloads.
Flexible and Open. SMTX ZBS can be delivered as standalone software that supports mainstream server hardware.
Simple and Agile. Start on a small scale, invest on demand, and expand online as your business requires.
Product Deployment Architecture
Overview
Distributed Architecture. Eliminates the bottleneck of the traditional controller architecture, makes full use of the performance of new storage media, and improves system concurrency and elastic scaling.
Software Defined. Fully decoupled from server hardware, SMTX ZBS not only gives users more flexible hardware choices and quickly integrates the latest hardware technologies to improve overall system capability, but also adapts quickly to new CPU architectures such as ARM to meet users' needs.
Storage Resource Pooling. Consolidates storage resources into a single pool, enabling unified planning, expansion, and on-demand use, improving resource utilization, and reducing operation and maintenance costs.
Multi-Compute Platform Support. Supports compute platforms such as vSphere, OpenStack, Kubernetes, and physical appliances.
Product Features
Rich Features of Enterprise-Level High Availability
Data Block Checksum at Hard Disk Level
Detects and handles silent data corruption through data checksums.
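As a minimal, hypothetical sketch of the idea (not the ZBS implementation), the snippet below stores a CRC32 checksum with each block at write time and re-verifies it on read, so a silently corrupted block is detected instead of being returned to the application. The block size, function names, and in-memory "storage" are illustrative only.

```python
import zlib

BLOCK_SIZE = 4096  # illustrative block size

def write_block(storage: dict, checksums: dict, block_id: int, data: bytes) -> None:
    """Store the block together with a CRC32 checksum computed at write time."""
    assert len(data) == BLOCK_SIZE
    storage[block_id] = data
    checksums[block_id] = zlib.crc32(data)

def read_block(storage: dict, checksums: dict, block_id: int) -> bytes:
    """Re-verify the checksum on every read; a mismatch means silent corruption."""
    data = storage[block_id]
    if zlib.crc32(data) != checksums[block_id]:
        raise IOError(f"silent data corruption detected in block {block_id}")
    return data

# Usage: a single flipped byte (e.g. bit rot or a misdirected write) is caught on read.
storage, checksums = {}, {}
write_block(storage, checksums, 0, b"\x00" * BLOCK_SIZE)
storage[0] = b"\x01" + storage[0][1:]
try:
    read_block(storage, checksums, 0)
except IOError as err:
    print(err)
```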
Data Protection at Node Level and Automatic Data Recovery
Data is protected across nodes through a multi-replica or erasure coding mechanism. When a component or node fails, available space is used automatically and data recovery runs concurrently across multiple nodes, ensuring that data redundancy always meets expectations.
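To illustrate the redundancy principle only (production erasure codes use k data + m parity fragments, and ZBS's exact scheme is not shown here), this toy sketch uses a single XOR parity fragment: when one data fragment is lost along with a node, it can be rebuilt from the survivors plus the parity.

```python
from functools import reduce

def xor_parity(fragments: list[bytes]) -> bytes:
    """XOR equally sized fragments together; a minimal 'k+1' parity code."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

def rebuild_missing(survivors: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing data fragment from the survivors plus parity."""
    return xor_parity(survivors + [parity])

# Usage: a stripe spread over three "nodes" with parity on a fourth.
fragments = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(fragments)
lost = fragments.pop(1)                # simulate a node failure
assert rebuild_missing(fragments, parity) == lost
```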
Protection at Rack Level
Through rack topology configuration, data is automatically placed on different racks, preventing the cluster from becoming inaccessible due to a power outage or failure of a single rack and further improving storage reliability.
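A hypothetical placement sketch (node and rack names are made up) of the rack-aware rule: each replica of a block lands on a node in a different rack, so losing any single rack still leaves usable copies.

```python
def place_replicas(block_id: int, racks: dict[str, list[str]], copies: int = 3) -> list[str]:
    """Choose `copies` nodes for a block, each from a different rack."""
    rack_names = sorted(racks)
    if copies > len(rack_names):
        raise ValueError("rack-level protection needs at least as many racks as replicas")
    chosen = []
    for i in range(copies):
        rack = rack_names[(block_id + i) % len(rack_names)]   # distinct rack per replica
        nodes = racks[rack]
        chosen.append(nodes[block_id % len(nodes)])           # spread load within a rack
    return chosen

racks = {"rack-1": ["node-1", "node-2"],
         "rack-2": ["node-3", "node-4"],
         "rack-3": ["node-5", "node-6"]}
print(place_replicas(42, racks))   # three replicas, no two on the same rack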
High Availability Mechanism for Access Services
Ensures the continuity of access services through high-availability mechanisms such as iSCSI VIP, NVMe-oF multipathing, and file access IP drift.
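As a client-side illustration only (the hosts and election logic are assumptions, not the ZBS mechanism), the sketch below probes iSCSI portals on the standard port 3260 and picks a healthy one; with a VIP or IP drift scheme, the address itself moves to a healthy node so initiators keep using a single endpoint.

```python
import socket

def portal_is_healthy(host: str, port: int = 3260, timeout: float = 1.0) -> bool:
    """Probe the iSCSI portal port to check whether the target still answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def elect_portal(portals: list[str]) -> str:
    """Return the first healthy portal; a VIP scheme instead moves the address itself."""
    for host in portals:
        if portal_is_healthy(host):
            return host
    raise RuntimeError("no healthy access portal available")

# Usage (addresses are illustrative):
# active = elect_portal(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
```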
Snapshot Protection
By taking a snapshot of the storage, data can be quickly restored to the state at the time the snapshot was taken, ensuring data security.
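The toy copy-on-write-style sketch below (not ZBS's snapshot format) shows why a snapshot is cheap to take and quick to roll back: it records the block map at a point in time and restores that map on rollback.

```python
class Volume:
    """Toy volume whose snapshots record the block map at a point in time."""

    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}
        self.snapshots: dict[str, dict[int, bytes]] = {}

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = data

    def snapshot(self, name: str) -> None:
        # Copy only the block map; unmodified block payloads are shared.
        self.snapshots[name] = dict(self.blocks)

    def rollback(self, name: str) -> None:
        # Restore the volume to the state captured when the snapshot was taken.
        self.blocks = dict(self.snapshots[name])

vol = Volume()
vol.write(0, b"v1")
vol.snapshot("before-upgrade")
vol.write(0, b"v2")            # accidental overwrite
vol.rollback("before-upgrade")
assert vol.blocks[0] == b"v1"
```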
Cross-Site Active-Active Clustering
Combined with a high-availability configuration on the client side, the stretched active-active cluster guarantees zero RPO and near-zero RTO, keeping applications available during disasters.
Intelligent Recovery Policy of Business-First
Adaptively adjusts the speed of recovery or migration according to system load, always giving priority to business I/O.
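One simple way to express such a policy (the formula and numbers are illustrative, not ZBS's actual controller) is to grant recovery only the headroom left after business I/O, with a floor so recovery always makes progress.

```python
def recovery_limit_mbps(business_iops: int, capacity_iops: int,
                        floor_mbps: int = 50, ceiling_mbps: int = 1000) -> int:
    """Throttle recovery/migration to the headroom left by business I/O (toy model)."""
    headroom = max(0.0, 1.0 - business_iops / capacity_iops)
    return int(floor_mbps + (ceiling_mbps - floor_mbps) * headroom)

print(recovery_limit_mbps(business_iops=90_000, capacity_iops=100_000))  # busy cluster: 145
print(recovery_limit_mbps(business_iops=10_000, capacity_iops=100_000))  # idle cluster: 905
```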
Agile Recovery Mechanism
During node upgrade or maintenance, the expected, temporary loss of replicas does not trigger data recovery; write requests issued while the replicas are offline are recovered at a finer granularity after the node is restored.
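A sketch of the bookkeeping this implies (the extent size and class name are assumptions): while a replica is offline, only the extents touched by new writes are marked dirty, so resynchronization after the node returns copies just those extents rather than the whole replica.

```python
class OfflineWriteJournal:
    """Track extents written while a replica is offline, for fine-grained resync."""

    def __init__(self, extent_size: int = 256 * 1024) -> None:
        self.extent_size = extent_size
        self.dirty: set[int] = set()

    def record_write(self, offset: int, length: int) -> None:
        first = offset // self.extent_size
        last = (offset + length - 1) // self.extent_size
        self.dirty.update(range(first, last + 1))

    def extents_to_resync(self) -> list[int]:
        return sorted(self.dirty)

journal = OfflineWriteJournal()
journal.record_write(offset=0, length=4_096)
journal.record_write(offset=1_000_000, length=8_192)
print(journal.extents_to_resync())   # only two extents need copying, not the whole replica
```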
Abnormal Disk Detection and Isolation
Automatically detects and isolates unhealthy, failing, or end-of-life disks to reduce the impact on system performance and on operations and maintenance.
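For a flavour of how such a health check can be scripted outside the product (this is not ZBS code; it assumes smartmontools 7 or later is installed for JSON output), the sketch below reads the SMART overall-health verdict and flags disks as candidates for isolation.

```python
import json
import subprocess

def disk_passes_smart(device: str) -> bool:
    """Read the SMART overall-health verdict via smartctl; treat errors as unhealthy."""
    try:
        result = subprocess.run(
            ["smartctl", "--health", "--json", device],
            capture_output=True, text=True, timeout=10,
        )
        report = json.loads(result.stdout)
        return report.get("smart_status", {}).get("passed", False)
    except (OSError, subprocess.TimeoutExpired, json.JSONDecodeError):
        return False

# Usage (device names are illustrative):
suspects = [d for d in ("/dev/sda", "/dev/sdb") if not disk_passes_smart(d)]
print("candidates for isolation:", suspects)
```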
Network Fail-Slow Detection and Isolation
Automatically and regularly checks storage and access networks and immediately isolates abnormal nodes and NICs to reduce the impact on system performance.
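A minimal probe in the same spirit (the port, threshold, and peer list are assumptions, not ZBS's detector): measure connect latency to each peer and flag the ones that are consistently slow.

```python
import socket
import statistics
import time

def median_connect_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect latency to a peer; unreachable peers count as infinite."""
    results = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=1.0):
                results.append((time.monotonic() - start) * 1000)
        except OSError:
            results.append(float("inf"))
    return statistics.median(results)

def fail_slow_peers(peers: list[str], port: int, threshold_ms: float = 5.0) -> list[str]:
    """Peers whose latency exceeds the threshold are candidates for isolation."""
    return [p for p in peers if median_connect_ms(p, port) > threshold_ms]
```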
Outstanding Performance
Fully Distributed Architecture
The distributed architecture eliminates controller bottlenecks, and the concurrent performance increases linearly with the number of nodes.
Proprietary File System Based on Bare Metal Devices
The file system is built directly on bare devices, which is better suited to high-performance block storage access and avoids the overhead of a general-purpose Linux file system.
All-Flash Support
Supports all-flash storage environments to fully meet enterprises' needs for high-performance scenarios.
Automatic Tiering of Hot and Cold Data
Cold data automatically sinks to HDDs while hot data remains in the cache tier, making full use of the advantages of SSD hardware and further improving performance.
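The toy two-tier store below (an LRU sketch, not ZBS's tiering algorithm) captures the behaviour described above: recently accessed blocks stay on the SSD tier, and blocks that go cold are evicted down to the HDD tier.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a bounded LRU 'SSD' tier backed by an 'HDD' tier."""

    def __init__(self, cache_blocks: int) -> None:
        self.cache_blocks = cache_blocks
        self.ssd: OrderedDict[int, bytes] = OrderedDict()   # hot tier (LRU order)
        self.hdd: dict[int, bytes] = {}                     # capacity tier

    def write(self, block_id: int, data: bytes) -> None:
        self.ssd[block_id] = data
        self.ssd.move_to_end(block_id)
        self._sink_cold()

    def read(self, block_id: int) -> bytes:
        if block_id in self.ssd:                 # hot hit: refresh recency
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.hdd.pop(block_id)            # cold hit: promote back to SSD
        self.write(block_id, data)
        return data

    def _sink_cold(self) -> None:
        while len(self.ssd) > self.cache_blocks:
            cold_id, cold_data = self.ssd.popitem(last=False)
            self.hdd[cold_id] = cold_data        # cold data sinks to HDD
```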
Volume Pinning
Prevents the performance degradation caused by cache breakdown by keeping a storage volume's data in the cache tier, ensuring consistently high performance.
High-Performance I/O Link
When Boost mode is enabled on the cluster, the vhost protocol shares memory among the guest OS, QEMU, and ZBS to optimize I/O request processing and data transfer, improving VM performance and reducing I/O latency.
High-Performance Data Transmission
Data is exchanged between cluster nodes over the RDMA protocol, effectively increasing cluster throughput and reducing latency.
Expand on Demand
Powerful Expansion Capability
Starting with 3 nodes, the capacity and performance can be easily expanded online in a single storage pool.
Intelligent Data Migration
Dynamically balances data distribution within the cluster and quickly restores that balance after storage capacity is expanded.
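As a toy planner only (real rebalancing also weighs topology, locality, and ongoing I/O), the sketch below shows the basic idea: after an empty node joins, data units migrate from the fullest nodes to the emptiest until usage is even.

```python
def plan_migrations(node_usage: dict[str, int], tolerance: int = 1) -> list[tuple[str, str]]:
    """Greedy rebalance plan: move one unit at a time from the fullest node
    to the emptiest until all nodes are within `tolerance` units of each other."""
    usage = dict(node_usage)
    moves: list[tuple[str, str]] = []
    while True:
        fullest = max(usage, key=usage.get)
        emptiest = min(usage, key=usage.get)
        if usage[fullest] - usage[emptiest] <= tolerance:
            return moves
        usage[fullest] -= 1
        usage[emptiest] += 1
        moves.append((fullest, emptiest))

# After an empty node-4 joins, data drains toward it until the cluster is balanced.
print(plan_migrations({"node-1": 10, "node-2": 10, "node-3": 10, "node-4": 0}))
```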
Intelligent Data Provision
Based on the capacity load of the cluster, data is intelligently distributed and dynamically adjusted following the principles of local-first placement, topological safety, localized provisioning, and capacity balance, achieving a balance between high performance and high reliability.
Simple Intelligent Operation & Maintenance
SMTX ZBS uses CloudTower for operation and maintenance management, which provides a unified management portal and overview for multiple clusters and enables comprehensive monitoring and alerting.
Learn about CloudTower