• Transfer and conversion of large volumes of big data
• RDB, sensor data, AWS (DDB, S3, Athena), Hadoop
• Data transfer and conversion for Data Lake operations that collect diverse, large-volume data

Main features

• Supports various big data platforms such as Hadoop, HBase, Impala, and Hive
• Supports Parquet and Avro, file formats optimized for the Hadoop ecosystem
• Provides data aggregation, sorting, and automatic data type conversion
• Provides troubleshooting through resource management (CPU, memory, disk space, etc.)
• Enables a non-stop platform through HA clustering
• Supports configuration management with Git or SVN
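To illustrate what automatic data type conversion involves, below is a minimal stdlib-only Python sketch (an assumption for illustration, not this product's actual implementation): it infers per-column types from string records, the kind of normalization an ingestion tool might apply to RDB or sensor exports before writing Parquet or Avro.

```python
# Illustrative sketch only, not the product's implementation:
# coerce string fields to int or float where possible, otherwise
# keep them as strings, mimicking automatic type inference.

def convert(value: str):
    """Try int, then float, falling back to the original string."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

# Example records, e.g. rows exported from an RDB or a sensor feed.
rows = [["1", "21.5", "sensor-a"], ["2", "22.0", "sensor-b"]]
typed = [[convert(v) for v in row] for row in rows]
# typed == [[1, 21.5, "sensor-a"], [2, 22.0, "sensor-b"]]
```

A real pipeline would also handle dates, nulls, and locale-specific number formats, but the principle is the same: detect the narrowest type that every value in a column satisfies.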