HSBC

Big Data DevOps Engineer

  • City of London (Greater London)
  • IT development

Job description

Big Data DevOps Engineer

Big Bank Funding. FinTech Thinking.

Our technology teams in the UK work closely with HSBC's global businesses to help design and build digital services that allow our millions of customers around the world to bank quickly, simply and securely. We also run and manage our IT infrastructure, data centres and core banking systems that power the world's leading international bank.

Our multi-disciplined teams include: DevOps engineers, IT architects, front and back end developers, infrastructure specialists, cyber experts, as well as project and programme managers.

We work in small, agile DevOps teams with colleagues around the world from our offices at the Bluefin Building in Southwark, our global headquarters in Canary Wharf, and multiple other locations around the UK including Sheffield, Leeds, Barnsley and Birmingham.

Following extensive investment across our Technology and Digital domains and with plans for continued expansion throughout 2019 and beyond, we are currently seeking a number of experienced Big Data DevOps Engineers to join HSBC Technology.


Business Area Overview


GB&M Big Data is a Global Markets and Banking Initiative that is part of the Group Data Strategy to transform the way we govern, manage and use all our data to its full potential across HSBC.

Assets developed as part of GB&M Big Data are designed to support HSBC at a Group level. These assets include the creation of a Data Lake for GBM and CMB: a single virtual pool of client, transaction, product, instrument, pricing and portfolio data. The lake is used to deliver solutions to business requirements, providing Big Data as a service to the business.

What you will be doing:

The DevOps Data Engineer will be part of the core Big Data technology and design team. You will be entrusted to develop solutions and design ideas that enable the software to meet its acceptance and success criteria, and will work with architects and Business Analysts to build data components on the Big Data environment.

As a key member of the technical team alongside Engineers, Data Scientists and Data Users, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:


You will be responsible for:

  • Software design, Java development and automated testing of new and existing components in an Agile, DevOps and dynamic environment
  • Promoting development standards, code reviews, mentoring and knowledge sharing
  • Product and feature design, scrum story writing
  • Data engineering and management
  • Product support and troubleshooting
  • Implementing the tools and processes, handling performance, scale, availability, accuracy and monitoring
  • Liaison with BAs to ensure that requirements are correctly interpreted and implemented, and with Testers to ensure they understand how requirements have been implemented so that they can be effectively tested
  • Participation in regular planning and status meetings; input to the development process through involvement in Sprint reviews and retrospectives; input into system architecture and design
  • Peer code reviews
  • 3rd line support

Desired profile

Qualifications:

What you will bring to the role:

To be successful in this role, you should have proven experience within the technology sector and knowledge of the following skills:

  • Experience in Java, Scala and/or Python in Unix/Linux environments, on-premises and in the cloud
  • Java development and design using Java 1.7/1.8, with an advanced understanding of core Java features and when to use them
  • Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, HBase, Presto, Python, ETL frameworks, MapReduce, SQL, RESTful services
  • Sound working knowledge of the Unix/Linux platform
  • Hands-on experience building data pipelines using Hadoop components: Sqoop, Hive, Pig, Spark, Spark SQL
  • Experience developing HiveQL and UDFs for analysing semi-structured/structured datasets
  • Experience with time-series/analytics databases such as Elasticsearch
  • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins) and requirements management in JIRA
  • Exposure to Agile project methodology, as well as to other methodologies such as Kanban
  • Understanding of data modelling using relational and non-relational techniques
  • Coordination between onsite and offshore teams
  • Experience debugging code issues and publishing the highlighted differences to the development team/architects
  • Understanding or experience of cloud design patterns

These roles will primarily be based in London at our Head Office in Canary Wharf.


Come Power a Business that Defines How to Power the World

As a business operating in markets all around the world, we believe diversity brings benefits for our customers, our business and our people. This is why HSBC UK is committed to being an inclusive employer and encourages applications from all suitably qualified applicants irrespective of background, circumstances, age, disability, gender identity, ethnicity, religion or belief and sexual orientation.

We want everyone to be able to fulfil their potential, which is why we provide a range of flexible working arrangements and family-friendly policies.

As an HSBC employee in the UK, you will have access to tailored professional development opportunities and a competitive pay and benefits package. This includes private healthcare for all UK-based employees, enhanced maternity and adoption pay and support when you return to work, and a contributory pension scheme with a generous employer contribution.

Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
