Hardware Requirements

There are some hardware limitations and requirements. Some of them will be resolved in future releases.

Hardware Limitations

Graphics Processing Unit

You will need an NVIDIA graphics card with Compute Capability 6.0 or higher. You can find a list of NVIDIA graphics cards and their compute capabilities here; a quick way to check the cards in your own machine is sketched below the example configuration.

Model example of a GPU optimized instance:

  • GPUs: 4x NVIDIA Tesla™ V100 16G Passive GPU

  • CPU: 2x Intel® Xeon® Platinum 8253 2.2G, 16C/32T, 10.4GT/s, 22M Cache, Turbo, HT (125W) DDR4-2933

  • Memory: 30x 8GB (total: 240GB) RDIMM, 2666MT/s, Single Rank

  • Total cost: $62,850.16
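
If you are not sure which compute capability your cards have, the check can be scripted. The sketch below is a minimal example, assuming the third-party pynvml package (Python bindings for NVIDIA's NVML library) is installed; it simply lists each GPU with its compute capability and VRAM.

    # Minimal sketch: list GPUs with their compute capability and VRAM.
    # Assumes the pynvml package is installed (e.g. pip install nvidia-ml-py).
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older pynvml versions return bytes
                name = name.decode()
            major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            status = "OK" if (major, minor) >= (6, 0) else "below 6.0"
            print(f"GPU {i}: {name}, compute capability {major}.{minor} ({status}), "
                  f"{mem.total / 2**30:.1f} GiB VRAM")
    finally:
        pynvml.nvmlShutdown()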

Block Size

The data is divided into blocks. The block size represents how many rows of a column are stored in one block of data. The block size is limited by the graphics card's VRAM: a single block cannot be split across multiple GPUs, so a single graphics card has to have enough VRAM to store at least one block of data. The data type of a block has to be taken into account as well, because an Integer32 block naturally needs less VRAM than a block of Integer64 data. It is possible to calculate the optimal block size, but many things affect it. In our experience, it is faster to find the maximum block size empirically than to calculate it. If you want to know how to set the optimal block size according to your data and queries, read Tip & Tricks - Block Size.
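
As a rough back-of-the-envelope illustration, the VRAM footprint of a single block can be estimated from the row count and the column's data type. The byte widths below are the usual sizes of these types, assumed for the example rather than taken from the database internals:

    # Rough VRAM estimate for a single block of data. The byte widths below
    # are the usual sizes of these types (an assumption for illustration,
    # not a figure taken from the database internals).
    TYPE_SIZE_BYTES = {"Integer32": 4, "Integer64": 8, "Float": 4, "Double": 8}

    def block_vram_bytes(block_size_rows: int, data_type: str) -> int:
        return block_size_rows * TYPE_SIZE_BYTES[data_type]

    # Example: the same 100M-row block needs twice the VRAM for Integer64.
    for dtype in ("Integer32", "Integer64"):
        gib = block_vram_bytes(100_000_000, dtype) / 2**30
        print(f"100M rows of {dtype}: ~{gib:.2f} GiB of VRAM")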

Memory Requirements

Operation Memory (RAM) Limitations

Currently there are some RAM capacity limitations. The first one shows up when importing large databases from .csv files; the second one during normal operation of the database.

Importing Large .CSV Files

When you want to import a large database, you currently need to divide the single large .csv file into multiple smaller parts and import them sequentially. In the near future we will support splitting the large .csv file into smaller ones and importing them sequentially on our side, so that the user will not have to do this. Because the data is copied during import, you will need about twice as much free RAM as the size of the CSV file.
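
Until the automatic splitting is available, a large file can be split with a short script. The sketch below is one possible approach, assuming a plain .csv file with a header row and no quoted fields containing newlines; the file names and the rows-per-part value are illustrative only:

    # Split a large CSV file into smaller parts, repeating the header row
    # in every part so each part can be imported on its own.
    # Assumes no quoted field contains an embedded newline.
    # File names and the rows-per-part value are illustrative only.
    ROWS_PER_PART = 10_000_000

    def split_csv(path: str, rows_per_part: int = ROWS_PER_PART) -> None:
        with open(path, "r", encoding="utf-8") as src:
            header = src.readline()
            part, rows, dst = 0, 0, None
            for line in src:
                if dst is None or rows == rows_per_part:
                    if dst is not None:
                        dst.close()
                    part += 1
                    rows = 0
                    dst = open(f"{path}.part{part:03d}.csv", "w", encoding="utf-8")
                    dst.write(header)
                dst.write(line)
                rows += 1
            if dst is not None:
                dst.close()

    split_csv("big_table.csv")  # then import the parts one after another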

Operation of Database

Currently we do not support swapping data to disk, so there has to be enough free RAM to hold all of the data. For correct operation of the database core (server side), you will need free memory equivalent to 1.5x the size of the saved database files (.db, .col) on disk. In the future, we will support loading data from disk on demand and will reduce the memory usage. For example, 1.2B rows of Integer_32 data in one column take 4.7 GB of disk space and, when loaded into memory, about 7.05 GB of RAM. We are currently working on reducing memory (RAM) usage when a database is loaded.
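
A rough way to check whether a machine has enough RAM is to sum the sizes of the stored database files and apply the 1.5x factor mentioned above. The sketch below does exactly that; the directory path is illustrative, and the .db/.col extensions follow the naming used in this section:

    # Estimate the free RAM needed to load a database, using the 1.5x rule
    # of thumb from this section. The directory path is illustrative.
    from pathlib import Path

    def estimated_ram_gib(db_dir: str) -> float:
        total_bytes = sum(
            f.stat().st_size
            for f in Path(db_dir).rglob("*")
            if f.suffix in (".db", ".col")
        )
        return 1.5 * total_bytes / 2**30

    print(f"Estimated free RAM needed: {estimated_ram_gib('/data/mydb'):.2f} GiB")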

How Block Size Affects Memory Usage

The block size is set per database (this value is used as the default for tables which do not have their own block size specified) and per table. If the block size for a particular table is specified, that value is used for that table. The block size cannot be set per column.

It is ideal to have a large block size for tables with huge amounts of data and a small block size for tables with small amounts of data. If you set a huge block size, e.g. 300M, for a table with a small amount of data, e.g. 10K rows, a block of 300M rows will be allocated for those 10K rows of data, which wastes a huge amount of memory (RAM).

So how to set a block size? If the columns in a table hold similar amounts of data and you do not use indexing, the best block size splits the data into as many blocks as there are GPU cards. Let's show it on an example: if the database has just one table with 3 columns, each column has 500M rows of data, 2 graphics cards are available for querying and indexing is turned off, the best block size is the largest one, which means 500M / 2 GPUs = a block size of 250M rows. There is a limitation that the whole block has to fit into VRAM (GPU memory); if it does not fit, you have to choose a smaller block size. If you use indexing, we recommend choosing the block size empirically by experimenting with different block sizes, because the data has a huge impact on performance when indexes are used. A sketch of the calculation for the non-indexed case follows below.
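
For the non-indexed case, the recommendation boils down to dividing the row count by the number of GPUs and then checking that one block of the widest column still fits into VRAM. A minimal sketch of that calculation, where the byte width and VRAM size are assumptions chosen for the example:

    import math

    # Suggest a block size for the non-indexed case: split the rows evenly
    # across the available GPUs, then check that one block fits into VRAM.
    # The byte width and VRAM size below are assumptions for this example.
    def suggest_block_size(rows: int, num_gpus: int) -> int:
        return math.ceil(rows / num_gpus)

    rows, num_gpus = 500_000_000, 2
    bytes_per_value = 8          # e.g. an Integer64 column
    vram_bytes = 16 * 2**30      # e.g. a 16 GiB card

    block_size = suggest_block_size(rows, num_gpus)
    block_bytes = block_size * bytes_per_value
    print(f"Suggested block size: {block_size} rows (~{block_bytes / 2**30:.1f} GiB per block)")
    if block_bytes > vram_bytes:
        print("Block does not fit into VRAM - choose a smaller block size.")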

Hardware Impact on Performance

The following hardware components are ordered from the most important to the least important in terms of their impact on speed.

GPU

It is very important to have a graphics processing unit with the following properties (ordered from the most important to the least important):

  • a lot of streaming multiprocessors,

  • a large bus width,

  • high throughput,

  • a sufficiently recent architecture; we recommend at least the Pascal architecture or newer, but older architectures will work as well,

  • a lot of VRAM.

RAM

The amount of RAM is more important than its speed.

CPU

In terms of speed, the most important thing is to have a CPU with at least as many cores as there are graphics cards, because each core handles one graphics card. Once you meet this condition, the frequency of the cores becomes the second most important parameter.

Storage

When there is not enough memory (RAM) to hold all the data, the database starts swapping data to disk, and that is when the speed of the disk starts to affect the speed of the queries. Disk speed always matters when data is being loaded from disk to memory or saved from memory to disk.