Greetings from Tech Mahindra!!!
Job Description
Job Location: Hyderabad/Bangalore/Pune
Total of 8+ years of experience in BI & DW, with at least 4 – 6 years of experience in Big Data implementations
• Understand business requirements and convert them into solution designs.
• Architect, design, and develop the Big Data / Data Lake platform.
• Understand the functional and non-functional requirements of the solution and mentor the team with technological expertise and decisions.
• Produce a detailed functional design document that matches customer requirements.
• Prepare, review, and own the technical design documentation.
• Perform code reviews and prepare documents for Big Data applications according to system standards.
• Conduct peer reviews to ensure consistency, completeness, and accuracy of the delivery.
• Detect, analyse, and remediate performance problems.
• Evaluate and recommend software and hardware solutions to meet user needs.
• Provide project support, support mentoring, and training for the transition to the support team.
• Share best practices and be consultative to clients throughout the duration of the project.
• Hands-on experience working with Hadoop distribution platforms such as Hortonworks, Cloudera, MapR, and others.
• Take end-to-end responsibility for the Hadoop life cycle in the organization.
• Be the bridge between data scientists, engineers, and the organization's needs.
• Perform in-depth requirement analysis and select the appropriate work platform.
• Full knowledge of Hadoop architecture and HDFS is a must.
• Working knowledge of MapReduce, HBase, Pig, MongoDB, Cassandra, Impala, Oozie, Mahout, Flume, ZooKeeper/Sqoop, and Hive.
• In addition to the above technologies, understanding of major programming/scripting languages such as Java, Linux shell scripting, PHP, Ruby, Python, and/or R.
• Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS, and/or knowledge of NoSQL platforms.
• Must have a minimum of 3+ years of hands-on experience in one of the Big Data technologies (e.g., Apache Hadoop, HDP, Cloudera, MapR):
o MapReduce, HDFS, Hive, HBase, Impala, Pig, Tez, Oozie, Sqoop
• Hands-on experience in designing and developing BI applications.
• Excellent knowledge of relational, NoSQL, and document databases, data lakes, and cloud storage.
• Expertise in various connectors and pipelines for batch and real-time data collection/delivery.
• Experience integrating with on-premises and public/private cloud platforms.
• Good knowledge of handling and implementing secure data collection/processing/delivery.
• Desirable: knowledge of Hadoop ecosystem components such as Kafka, Spark, Solr, and Atlas.
• Desirable: knowledge of one of the open-source data ingestion tools such as Talend, Pentaho, Apache NiFi, Spark, or Kafka.
• Desirable: knowledge of one of the open-source reporting tools such as BIRT, Pentaho, JasperReports, KNIME, Google Chart API, or D3.
Interested candidates, please share your updated profile with the details below:
Current CTC:
Expected CTC :
Passport number:
Notice period:
Current Location:
Preferred Location: