In this Hadoop Architecture and Administration big data training course, you gain the skills to install, configure, and manage the Apache Hadoop platform and its associated ecosystem, and build a Hadoop big data solution that satisfies your business and data science requirements. You will learn to install and build a Hadoop cluster capable of processing very large data sets, then configure and tune the Hadoop environment to ensure high throughput and availability.
Additionally, this course teaches attendees how to allocate, distribute, and manage resources; monitor the Hadoop file system, job progress, and overall cluster performance; and exchange information with relational databases.
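As a rough illustration of the file-system monitoring covered in the course, the Python sketch below polls a NameNode over WebHDFS using the third-party hdfs (HdfsCLI) package; the endpoint URL, user name, and directory path are placeholder assumptions rather than course material.

# Minimal HDFS monitoring sketch. Assumes the third-party "hdfs" (HdfsCLI)
# package and a reachable WebHDFS endpoint; URL, user, and path are placeholders.
from hdfs import InsecureClient

client = InsecureClient("http://namenode.example.com:9870", user="hdfs")

# Summarise space usage and object counts under one directory.
summary = client.content("/data")
print("Files:", summary["fileCount"])
print("Directories:", summary["directoryCount"])
print("Bytes consumed (with replication):", summary["spaceConsumed"])

# List the directory and report each entry's type, size, and replication factor.
for name, status in client.list("/data", status=True):
    print(name, status["type"], status["length"], status.get("replication"))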
TRAINING AT YOUR SITE
Learning Tree offers customised training at your site, public courses in Stockholm, London, or Washington, the option to attend via our Anywhere centers (Malmö, Göteborg, Linköping, Stockholm, or Borlänge), or various forms of instructor-supported e-learning. Read more at www.learningtree.se/priser .
Installing the Hadoop Distributed File System (HDFS)
Setting the stage for MapReduce
Planning the architecture
Building the cluster
Preparing HDFS
Deploying MapReduce (see the sketch after this outline)
Creating a fault-tolerant file system
Leveraging NameNode Federation
Introducing YARN
Allocating resources
Maintaining HDFS
Administering MapReduce
Employing the standard built-in tools
Tuning with supplementary tools
Simplifying information access
Integrating additional elements of the ecosystem
Facilitating generic input/output
Acquiring application-specific data
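To give a flavour of the Deploying MapReduce topic above, here is a minimal word-count job written for Hadoop Streaming: the mapper and reducer are plain Python scripts that read standard input and write standard output, and would be submitted with the hadoop-streaming JAR using its -files, -mapper, -reducer, -input, and -output options. The word-count task and file names are illustrative assumptions, not an exercise taken from the course.

# --- mapper.py: emit "word<TAB>1" for every word on standard input ---
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# --- reducer.py: sum the counts per word (Streaming delivers keys sorted) ---
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")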
Yes! We know your busy work schedule may prevent you from getting to one of our classrooms, which is why we offer convenient online training to meet your needs wherever you want.
A data science algorithm ingests data from an appropriate storage technology, such as a relational database, MongoDB, or the Hadoop Distributed File System, into R or Python for data wrangling and model building. If the amount of data is large, execution is performed in parallel using Spark. The results are often visualised by the end user on dashboards.
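As a hedged sketch of that workflow, the PySpark snippet below reads a data set from HDFS, wrangles it in parallel on the cluster, and hands a small aggregate back to pandas for dashboarding; the HDFS path and column names are illustrative assumptions.

# Illustrative PySpark workflow: ingest from HDFS, wrangle in parallel,
# return a small summary to the driver for visualisation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-and-wrangle").getOrCreate()

# Ingest: read a CSV data set stored in HDFS (path is a placeholder).
df = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# Wrangle: drop incomplete rows, then aggregate per category across the cluster.
summary = (
    df.dropna(subset=["category", "amount"])
      .groupBy("category")
      .agg(F.count("*").alias("events"), F.avg("amount").alias("avg_amount"))
)

# Model building or a dashboard would pick up from here; toPandas() pulls the
# small aggregated result back to the driver for plotting or a BI tool.
print(summary.toPandas().head())

spark.stop()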
Your Training Comes with a 100% Satisfaction Guarantee!*
*Partner-delivered courses may have different terms that apply. Ask for details.