I know we can load a Parquet file using Spark SQL and Impala, but I am wondering whether we can do the same using Hive. I have been reading many articles but am still confused.
Simply put, I have a Parquet file, say users.parquet. Now I am stuck on how to load/insert/import the data from users.parquet into Hive (obviously into a table).
Please advise or point me in the right direction if I am missing something obvious.
Creating a Hive table using Parquet file metadata
https://phdata.io/examples-using-textfile-and-parquet-with-hive-and-impala/
Get the schema of the Parquet file using parquet-tools; for details, see http://kitesdk.org/docs/0.17.1/labs/4-using-parquet-tools-solution.html. An example is shown below.
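For example, assuming parquet-tools is on your PATH and users.parquet is on the local filesystem (the path and the output shown are illustrative), the schema command prints the file's Parquet schema, which tells you the column names and types to use in the Hive DDL:

    parquet-tools schema /path/to/users.parquet
    # hypothetical output:
    # message users {
    #   required binary name (UTF8);
    #   required int32 age;
    # }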
Then build a table using that schema on top of the file; for details, see Create Hive table to read parquet files from parquet/avro schema. A sketch follows.
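As a minimal sketch, assuming users.parquet has been copied into an HDFS directory such as /user/hive/users and has the two columns from the hypothetical schema above (the path, table name, and columns are illustrative), you can map the existing file with an external Parquet table and query it directly:

    -- External table pointing at the HDFS directory that already holds users.parquet
    CREATE EXTERNAL TABLE users_parquet (
      name STRING,
      age  INT
    )
    STORED AS PARQUET
    LOCATION '/user/hive/users';

    -- Verify Hive can read the Parquet data
    SELECT * FROM users_parquet LIMIT 10;

Using an external table keeps the original file in place; if you drop the table, only the metadata is removed, not users.parquet itself.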