Company: Logistics Product Company
Experience: 10+ years
Responsibilities
Selecting and integrating the Big Data tools and frameworks required to provide the requested capabilities
Designing ETL processes (a brief sketch follows this list)
Monitoring and evaluating performance, and advising on any necessary infrastructure changes, including a change of cloud platform
Defining data retention policies
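
As a rough illustration of the ETL responsibility above, here is a minimal PySpark batch sketch: extract raw CSV, normalize it, and load partitioned Parquet for downstream querying. The HDFS paths, column names, and application name are hypothetical, not part of the listing.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("daily-shipments-etl").getOrCreate()

# Extract: raw CSV files landed by an upstream feed (path is a placeholder).
raw = spark.read.option("header", "true").csv("hdfs:///landing/shipments/")

# Transform: normalize the date column and drop rows missing the key.
clean = (raw
         .withColumn("shipped_on", to_date(col("shipped_on"), "yyyy-MM-dd"))
         .filter(col("shipment_id").isNotNull()))

# Load: write partitioned Parquet for downstream querying (e.g. from Hive or Impala).
(clean.write
 .mode("overwrite")
 .partitionBy("shipped_on")
 .parquet("hdfs:///warehouse/shipments/"))
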
Skills and Qualifications
Proficient understanding of distributed computing principles
Management of a Hadoop cluster, with all included services
Ability to solve any ongoing issues with operating the cluster
Proficiency with Hadoop v2, MapReduce, HDFS
Experience building stream-processing systems using solutions such as Storm or Spark Streaming (see the streaming sketch after this list)
Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
Experience with Spark and NoSQL databases, such as HBase, Cassandra, MongoDB
Experience with integration of data from multiple data sources
Knowledge of various ETL techniques and frameworks, such as Flume
Experience with various messaging systems, such as Kafka or RabbitMQ
Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
Good understanding of Lambda Architecture, along with its advantages and drawbacks
Experience with Cloudera/MapR/Hortonworks
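
As a rough illustration of the stream-processing and messaging skills above, here is a minimal PySpark Structured Streaming sketch that counts Kafka events per time window. The broker address, topic name, and application name are hypothetical, and a real deployment would need the spark-sql-kafka connector on the classpath and would write to a sink such as HBase or Cassandra rather than the console.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = (SparkSession.builder
         .appName("shipment-event-counts")
         .getOrCreate())

# Read a stream of events from a Kafka topic (broker and topic are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "shipment-events")
          .load())

# Kafka delivers keys and values as binary; cast the value to a string
# and count events per 5-minute event-time window.
counts = (events
          .selectExpr("CAST(value AS STRING) AS event", "timestamp")
          .groupBy(window(col("timestamp"), "5 minutes"))
          .count())

# Emit running counts to the console for demonstration purposes.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
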
If you are interested in exploring this role, please forward your updated resume at the earliest, along with the details below:
Current CTC:
Expected CTC:
Notice period: