Accenture

Technology - Big Data Engineer

  • Internship
  • IT development

Job description

Key responsibilities may include:

· Process and rationalize structured, message, and semi-/unstructured data, and integrate multiple large data sources and databases into one system
· Design an optimal data warehouse and analytics solution to fulfill the business and technical requirements
· Demonstrate a continual desire to implement “strategic” or “optimal” solutions and where possible, avoid workarounds or short-term tactical solutions
· Work with the team through problem definition, issue identification, and work-plan development, applying problem-solving principles and past experience
· Identify gaps between business requirements and the data architecture, and propose solutions to close or narrow those gaps
· Manage stakeholder expectations and ensure that robust communication and escalation mechanisms are in place across the project portfolio
· Design common data architecture assets, including frameworks, principles, guidelines, and templates, to guide the delivery team in realizing an optimized solution

Desired profile

Qualifications:

·  2 to 6 years of experience in data engineering
·  Proficient understanding of distributed computing principles, the fundamental design principles behind scalable applications, and the Big Data Hadoop ecosystem
·  Practical expertise in developing applications and using querying tools on top of Spark and Hive
·  Experience with Scala or Java
·  Experience with Kafka
·  Good knowledge of SDLC and formal Agile processes, a bias towards TDD and a willingness to test products as part of the delivery cycle
·  Practical experience in using HDFS

Preferred Skill Requirements:
·  Knowledge of and experience using data models and data dictionaries in a Banking and Financial Markets context; knowledge of Trade Finance or Securities Services is particularly useful
·  Stakeholder management, team management
·  Experience working in teams using Agile methods (Scrum) and Confluence/JIRA
·  Knowledge and experience in Data Quality & Governance
·  Experience with Hortonworks/Cloudera platforms
·  Knowledge of at least one Python web framework (preferably Flask, Tornado, and/or Twisted)
·  Experience in Python, particularly the Anaconda environment
·  Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3 would be a plus
·  Experience using Git/GitLab as a version control system
·  Experience of Continuous Integration/Continuous Deployment (Jenkins/Hudson/Ansible)
·  Knowledge of the Elastic Stack (ELK)
·  Experience with Google Cloud Platform (Dataproc/Dataflow)
