Hive: Different Types of File Formats Supported by Hive


Apache Hive supports a variety of file formats to store and process data. These file formats can be categorized into text-based formats, binary formats, and columnar formats. Each file format has its own advantages and trade-offs in terms of storage, compression, and query performance. Here is a list of some common file formats supported by Hive:

  1. Text-based Formats:
    • TextFile: The default file format in Hive. It stores data as plain text files with one record per line. TextFile is human-readable and easy to parse, but it usually results in larger files and slower query performance compared to other formats.
    • SequenceFile: A flat file format that stores data in a binary key-value format. SequenceFiles are more efficient than TextFiles in terms of storage and query performance, but they are not human-readable.
  2. Binary Formats:
    • Avro: A row-based binary file format that is flexible, compact, and schema-aware. Avro files store the schema alongside the data, which makes it easier to read and write data with evolving schemas. Avro also supports data compression and is well suited to write-heavy, record-at-a-time workloads.
  3. Columnar Formats:
    • Parquet: A columnar storage file format optimized for use with Hadoop and Hive. Parquet stores data column by column, which enables better compression and more efficient querying. It is particularly suitable for analytical workloads where only a subset of columns is accessed frequently.
    • ORC (Optimized Row Columnar): A columnar storage file format developed specifically for Hive. ORC offers strong compression, fast query performance (including predicate pushdown via built-in indexes), and support for complex data types. It supports compression codecs such as Zlib and Snappy.
    • RCFile (Record Columnar File): A columnar storage file format that predates ORC. Like Parquet and ORC, RCFile stores data in a columnar layout, which allows for better compression and more efficient querying. However, ORC has largely replaced RCFile due to its improved performance and features.
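To make the trade-off between text and columnar formats concrete, an existing TextFile table can be rewritten into ORC with a CREATE TABLE ... AS SELECT statement. This is a minimal sketch; the table name logs_text is hypothetical and stands in for any existing text-format table:

```sql
-- Assume an existing table `logs_text` stored as the default TextFile format.
-- Rewriting it into ORC typically shrinks storage and speeds up column scans.
CREATE TABLE logs_orc
STORED AS ORC
AS
SELECT * FROM logs_text;
```

After the rewrite, queries that touch only a few columns read far less data from HDFS than they would against the original text table.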

To store data in a specific file format, use the STORED AS clause when creating a table in Hive. For example, to create a table using the Parquet file format:

CREATE TABLE my_table (id INT, name STRING, age INT)
STORED AS PARQUET;
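A table's format can also be tuned through TBLPROPERTIES. For instance, ORC's compression codec can be switched from the default Zlib to Snappy; this is a sketch, and the table name events_orc is illustrative:

```sql
-- ORC table using Snappy instead of the default Zlib codec.
-- Snappy usually compresses less but decompresses faster.
CREATE TABLE events_orc (id INT, name STRING, age INT)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "SNAPPY");
```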

Choosing the right file format for your use case depends on various factors, such as the nature of the data, the type of queries you run, and the storage and performance requirements of your application. Columnar formats like ORC and Parquet are often recommended for analytical workloads, while Avro is a good choice for situations where schema evolution is important.
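To illustrate the schema-evolution point, recent Hive versions can create Avro-backed tables directly with STORED AS AVRO, and columns can be appended later without rewriting existing files. The table and column names below are hypothetical:

```sql
-- Avro-backed table; Hive derives the Avro schema from the column list.
CREATE TABLE users_avro (id INT, name STRING)
STORED AS AVRO;

-- Later, the schema can evolve: existing Avro files remain readable,
-- and the new column simply reads as NULL for older records.
ALTER TABLE users_avro ADD COLUMNS (email STRING);
```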
