
Devops – Big data/Artificial Intelligence/Machine Learning

  • Internship
  • Bangalore ( Bangalore Urban )

Job description

BlueData is transforming how enterprises deploy AI, Machine Learning, and Big Data Analytics. Now we're part of the HPE family: a start-up within a big company, focused on delivering innovative AI/ML solutions for enterprises of all sizes. We are looking for an entrepreneurial mindset and a go-getter attitude, with a passion for creating delightful user experiences in the new, hybrid-cloud-centric era. This is a new and growing team at HPE in which you will build cloud-compatible software and applications for Machine Learning and Deep Learning deployment. You will be at the cutting edge of distributed Big Data systems, AI and Deep Learning frameworks, hardware accelerators such as GPUs/FPGAs, and Kubernetes container orchestration.

We are looking for a big-data platform engineer to design and develop our big-data application infrastructure. This role involves development and support of the various big-data applications and frameworks on the BlueData EPIC platform, including installation, configuration, and management of Hadoop/Spark job execution frameworks, distributed file systems, NoSQL databases, and SQL-on-Hadoop systems.

Desired Skills & Experience/Qualifications:

  • BE or ME in Computer Science, or equivalent
  • Experience with large data sets and distributed computing
  • Hands-on experience with the Hadoop stack (e.g. MapReduce, Sqoop, Pig, Hive, HBase, Flume)
  • Experience working on large Linux clusters
  • Hands-on experience with production Hadoop systems (e.g. administration, configuration management, monitoring, debugging, and performance tuning)
  • Knowledge of NoSQL platforms (e.g. key-value stores)
  • Hands-on experience with open-source software platforms and languages (e.g. Java/Scala, Python)
  • Familiarity with AI/ML frameworks
  • Knowledge of cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce)
  • Knowledge of data warehousing and Business Intelligence systems
  • Self-starter and fast learner, with the ability to work in a fast-paced environment

Hewlett Packard Enterprise Values:

Partnership first: We believe in the power of collaboration, building long-term relationships with our customers, our partners, and each other

Bias for action: We never sit still - we take advantage of every opportunity

Innovators at heart: We are driven to innovate - creating both practical and breakthrough advancements

What do we offer?

Extensive social benefits, flexible working hours, a competitive salary, and shared values make Hewlett Packard Enterprise one of the world's most attractive employers. At HPE, our goal is to provide equal opportunities, work-life balance, and constantly evolving career opportunities.

If you are looking for a challenge in a pleasant and international work environment, then we definitely want to hear from you. Apply now below, or directly via our Careers Portal at www.hpe.com/careers.

You can also find us on:

https://www.facebook.com/HPECareers

https://twitter.com/HPE_Careers

Job:
Engineering

Job Level:
Intermediate



Hewlett Packard Enterprise is EEO F/M/Protected Veteran/Individual with Disabilities.



HPE will comply with all applicable laws related to the use of arrest and conviction records, including the San Francisco Fair Chance Ordinance and similar laws and will consider for employment qualified applicants with criminal histories.