Cloudera Developer Training for Spark and Hadoop
Xebia's four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this training course is the best preparation for the real-world challenges faced by Hadoop developers. Participants learn to identify which tool is the right one for a given situation and gain hands-on experience developing with those tools.
Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools
Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the broader Hadoop ecosystem, including:
- How data is distributed, stored, and processed in a Hadoop cluster
- How to use Sqoop and Flume to ingest data
- How to process distributed data with Apache Spark
- How to model structured data as tables in Impala and Hive
- How to choose the best data storage format for different data usage patterns
- Best practices for data storage
Audience and Prerequisites
This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.
CCA: Spark and Hadoop Developer Certification
CCA175 is a hands-on, practical exam using Cloudera technologies. Each user is given their own CDH5 (currently 5.3.2) cluster pre-loaded with Spark, Impala, Crunch, Hive, Pig, Sqoop, Kafka, Flume, Kite, Hue, Oozie, DataFu, and many others (see a full list). In addition, the cluster comes with Python (2.6 and 3.4), Perl 5.10, Elephant Bird, Cascading 2.6, Brickhouse, Hive Swarm, Scala 2.11, Scalding, IDEA, Sublime, Eclipse, and NetBeans.
Learn more about the CCA Certification Exam here: http://www.cloudera.com/content/www/en-us/training/certification/cca-spark.html
CCP: Data Engineer Certification
This course is an excellent place to start for people working towards the CCP: Data Engineer certification. Although further study is required to pass the exam (we recommend Developer Training for Spark and Hadoop II: Advanced Techniques), this course covers many of the subjects tested in the CCP: Data Engineer exam.
Learn more about the CCP Certification Exam here: http://www.cloudera.com/content/www/en-us/training/certification/ccp-data-engineer.html
- Introduction to Hadoop and the Hadoop Ecosystem
- Hadoop Architecture and HDFS
- Importing Relational Data with Apache Sqoop
- Introduction to Impala and Hive
- Modeling and Managing Data with Impala and Hive
- Data Formats
- Data Partitioning
- Capturing Data with Apache Flume
- Spark Basics
- Working with RDDs in Spark
- Writing and Deploying Spark Applications
- Parallel Programming with Spark
- Spark Caching and Persistence
- Common Patterns in Spark Data Processing
- Preview: Spark SQL
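The Spark modules in the outline above revolve around RDD-style transformations such as flatMap, map, and reduceByKey. As a rough, Spark-free sketch of that pattern (illustrative only, not course material; plain Python is used here so no cluster is needed), the classic word-count example looks like this:

```python
from collections import Counter

def word_count(lines):
    """Count word occurrences across lines, mirroring the Spark RDD pattern."""
    # flatMap-like step: split each line into individual lowercase words
    words = (word.lower() for line in lines for word in line.split())
    # reduceByKey-like step: sum the occurrences of each word
    return Counter(words)

lines = [
    "Spark processes data in parallel",
    "Spark caches data in memory",
]
counts = word_count(lines)
print(counts["spark"])  # prints 2
```

In actual Spark, the same logic is expressed as chained RDD operations (`flatMap`, `map`, `reduceByKey`) that run distributed across the cluster rather than on a single machine.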
Please note that you need to bring your own laptop for this training. The laptop should meet the following requirements:
- Minimum RAM required: 8 GB
- Minimum free disk space: 25 GB
- VMware Player 6.x or above (Windows)/VMware Fusion 6.x or above (Mac)
- Student machines must have VT-x virtualization support enabled in the BIOS.
- If running Windows XP: 7-Zip or WinZip (due to a bug in Windows XP's built-in Zip utility)
- Student machines must support a 64-bit VMware guest image.
- If the machines are running a 64-bit version of Windows, or Mac OS X on a Core 2 Duo processor or later, no other test is required. Otherwise, VMware provides a tool to check compatibility, which can be downloaded from http://tiny.cloudera.com/training2