This course teaches you to use Pig and Hive to prepare and analyse large data sets on Hadoop, so you can make more informed and timely business decisions. You will learn to increase productivity by avoiding the low-level Java coding characteristic of MapReduce, and to rapidly begin extracting business value for competitive advantage. In this Pig and Hive for Big Data training course, you will learn to gain access to previously inaccessible data, gather and feed data into Hadoop for storage, transform and filter data using Pig, and extract value using Hive and Spark SQL.
Learning Tree offers customised training at your site, open courses in Stockholm, London or Washington, the option to attend via our Anywhere centres (Malmö, Göteborg, Linköping, Stockholm or Borlänge), or various forms of instructor-supported e-learning. Read more at www.learningtree.se/priser.
Guaranteed-to-run courses: When you see the "Guaranteed to Run" symbol next to a course date, you know the course will take place. Guaranteed.
Storing data in HDFS
Parallel processing with MapReduce
Automating data transfer
Describing characteristics of Apache Pig
Structuring unstructured data
Transforming data with Relational Operators
Filtering data with Pig
Leveraging business advantages of Hive
Organising data in Hive Data Warehouse
Designing data layout for maximum performance
Performing joins on unstructured data
Pushing HiveQL to the limit
Deploying Hive in production
Streamlining storage management with HCatalog
Hadoop programming at the low level is done in Java. Pig and Hive provide ease of programming by allowing the programmer to write scripts in a simpler language, Pig Latin or HiveQL. Those scripts are compiled and optimised internally, and equivalent Java code is generated and executed without the programmer having to write any Java.
Data science algorithms need to ingest data from an appropriate storage technology such as a relational database, a NoSQL database, or the Hadoop distributed file system. Before this data can be fed to an algorithm, it must be cleaned and structured. This data wrangling stage is where Pig and Hive are particularly useful.
Apache Pig is a platform for analysing large data sets. Programs are written in a high-level language, Pig Latin. They are converted by Pig's infrastructure into sequences of Java MapReduce programs, which are then executed on Hadoop. Without writing Java, one can use Pig to leverage Hadoop's ability to process data in parallel.
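As an illustrative sketch of what a Pig Latin script looks like (the file paths, field names and schema below are invented for illustration, not taken from the course materials):

```pig
-- Load raw web-log lines from HDFS (path and schema are assumed for illustration)
logs = LOAD '/data/weblogs' USING PigStorage('\t')
       AS (ip:chararray, ts:chararray, url:chararray, status:int);

-- Filter: keep only failed requests
errors = FILTER logs BY status >= 400;

-- Transform with relational operators: group and count per URL
by_url = GROUP errors BY url;
counts = FOREACH by_url GENERATE group AS url, COUNT(errors) AS hits;

-- Store the result back into HDFS
STORE counts INTO '/data/weblog_errors';
```

A dozen lines like these replace the mapper, reducer and driver classes you would otherwise write in Java; Pig compiles the script into one or more MapReduce jobs and runs them in parallel across the cluster.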
Apache Hive is data warehouse software that translates commands written in a SQL-like language, HiveQL, into Hadoop MapReduce jobs that are then executed on Hadoop. Without writing Java one can use Hive to leverage Hadoop's ability to process data in parallel.
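For comparison, a minimal HiveQL sketch (the table name, columns and query are hypothetical, chosen only to show the SQL-like style):

```sql
-- Hypothetical table definition; names and layout are illustrative only
CREATE TABLE weblogs (
  ip STRING,
  ts STRING,
  url STRING,
  status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- A familiar SQL-style query; Hive compiles it into MapReduce jobs
SELECT url, COUNT(*) AS hits
FROM weblogs
WHERE status >= 400
GROUP BY url
ORDER BY hits DESC
LIMIT 10;
```

Because the query reads like standard SQL, anyone with database experience can get results from Hadoop without learning the MapReduce API.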
Pig is typically used early in the data pipeline to clean and structure data. Hive is typically used later, when the data has structure and well-defined fields. Since Hive has the concepts of tables, rows and columns, it integrates easily with BI tools.
Yes! We know your busy work schedule may prevent you from getting to one of our classrooms, which is why we offer convenient online training to meet your needs wherever you are.
Questions about which training is right for you? Call 08-506 668 00.
Your Training Comes with a 100% Satisfaction Guarantee!*
*Partner-delivered courses may have different terms that apply. Ask for details.