Hi,
Greetings!!!
We have an excellent opportunity in Big Data Development with one of our prestigious clients in Investment Banking.
Job location: Pune
Experience: 5-12 years
Required Skills: Big Data, Hadoop, Spark, Java & Python/Scala.
Please find the JD below for your reference:
Job Purpose:
• The position requires an Application Developer with 5-12 years of experience, strong hands-on Hadoop and Spark experience, and an understanding of data warehousing, ETL and SQL concepts
• The candidate will report to the Application Lead/Manager
• The position is based in Pune, India
• The candidate will also get the chance to work with global tech leads on other project initiatives
• Applies skills and knowledge of the business to develop creative solutions to meet client and business needs
• The candidate will work with complex and variable issues with substantial potential impact, weighing various alternatives and balancing potentially conflicting needs
• Opportunity to work on complex, business-critical Financial Services applications deployed globally
Key Responsibilities:
• Responsible for design, programming, deployment, testing and maintenance of one or more applications on Hadoop and Spark platforms
• Design and development on the Hadoop ecosystem
• Physical layout design and implementation using HDFS and file formats
• Hands-on experience in SQL and the ability to investigate data issues and perform data scrubbing, data cleansing and quality rules
• Experience with data migration between traditional (RDBMS) systems and Big Data platforms
• Ability to reference and understand system and application documentation without assistance from others
• Strong background in data modelling techniques, data warehousing and logical modelling best practices on Big Data platforms
• Participate in application design sessions and provide valuable insights
Knowledge/Experience:
• Developer with 5-12 years of experience in the Big Data ecosystem, with at least 4 years of Big Data (Hadoop, Hive, Spark) as the primary skill
• Good understanding of data mining, machine learning, natural language processing, or information retrieval
• Expertise in Big Data querying tools, e.g. Hive and Impala
• Minimum 4 years of experience with Java/J2EE (optional)
• Minimum 4 years of experience with Spark programming in Python or Scala, and the Oozie workflow scheduler
• Minimum 2 years of experience with ML toolkits such as Spark ML, scikit-learn or H2O
• Experience building stream-processing systems using solutions such as Spark Streaming, Flume and Kafka
• Experience with NoSQL databases such as HBase and MongoDB is a plus
• Experience processing large amounts of structured and unstructured data
• Big Data performance optimization experience would be an added advantage
Secondary Skills:
• NoSQL (MongoDB, Cassandra), Kafka, Python
• Hadoop security and familiarity with the CDH distribution
• Exposure to data visualization tools
Competencies:
• Good communication, presentation and writing skills
• Good analytical and problem-solving skills
• Should be a good team player
Qualifications:
• MCA/B.Tech/M.E./M.Tech (in computers, information technology or engineering)
If your profile suits the above requirements, please share your updated resume along with the details below:
Total Experience:
Relevant Experience:
Current Salary:
Expected Salary:
Notice Period:
Please note: If your profile does not suit this requirement, kindly ignore this email, as it is a mass mailing.