5+ years of ETL experience with data warehousing, data modeling, and enterprise BI ETL development (Informatica, IBM DataStage, Oracle Data Integrator, etc.).
3+ years' experience with Agile methodologies in a business intelligence or data modeling/warehouse context (e.g. Kimball methodology).
Knowledge of UNIX scripting and Windows-based BI platforms is a big plus.
3+ years of solid experience working with Cloudera Hadoop
Should be able to independently design, develop, and review medium- to high-complexity Hive and Spark scripts
Working experience with Pig, Hive, MapReduce, and the Hadoop Distributed File System (HDFS)
Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, Hive, Pig, Oozie, Sqoop, MapReduce, and YARN.
Experience developing scripts and numerous batch jobs to schedule various Hadoop programs.
Experience analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java.
Experience importing and exporting data between databases such as Oracle and MySQL and HDFS/Hive using Sqoop.
Experience collecting and storing streaming data, such as log data, in HDFS using Flume or Kafka.
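For candidates unfamiliar with the MapReduce paradigm referenced above, the following is a toy sketch in plain Python of the map/shuffle/reduce word-count pattern; a real job would run on a Hadoop cluster via the Java MapReduce API or be expressed in HiveQL or Pig Latin, and the function names here are illustrative only.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs, as a word-count mapper would
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big jobs", "big plans"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 3, 'data': 1, 'jobs': 1, 'plans': 1}
```

On a cluster, the map and reduce phases run in parallel across many nodes and the shuffle moves data over the network, but the data flow is the same as in this single-process sketch.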
- Understanding of application architecture and technology infrastructure preferred
- Able to learn technical concepts quickly and apply them effectively in the workplace
- Able to adapt to changing business requirements and react quickly
- Strong customer focus and results-oriented attitude
- Self-motivated individual, able to work independently and manage numerous deliverables in parallel
Please reply with your updated resume in DOC or PDF format to firstname.lastname@example.org so we can proceed further.