Sr Data Engineer - Cloud
Bengaluru (Bangalore Urban) | IT Development
Job description
- Location: Bangalore, Karnataka, India
- Area of Interest: Engineer - Software
- Job Type: Professional
- Technology Interest: Collaboration, Video
- Job Id: 1250565
Business purpose:
Today's enterprises are adopting data lake architectures to power analytics, business intelligence, and new product features. We're leveraging the latest GCP and AWS services to build a cutting-edge, highly scalable, and cost-effective platform.
An immediate opportunity exists for a senior-level engineer with a passion for creating outstanding products to help take the Cloud Calling Analytics team to the next level. We need someone who is passionate about leading change, optimizing customer engagement, driving experiences for our Cloud Collaboration stakeholders, growing our platform, incorporating new workloads, and expanding our capabilities.
What You'll Do:
- Be a key leader and contributor to the design and development of a scalable and cost-effective cloud-based data platform based on a data lake design.
- Collaborate with team members, Product Management, Architects, and data producers and consumers throughout the company.
- Develop data platform components in a cloud environment to ingest data and events from cloud and on-premises environments as well as from third parties.
- Build automated pipelines and data services to validate, catalog, aggregate and transform ingested data.
- Build automated data delivery pipelines and services to integrate data from the data lake into internal and external consuming applications and services.
- Build and deliver cloud-based deployment and monitoring capabilities consistent with DevOps models.
- Keep knowledge and skills current with the latest cloud services, features, and best practices.
Who You Are:
- Extensive experience designing and delivering enterprise-grade, high-transaction-volume Data Platform as a Service (dPaaS) offerings, plus experience with Data Lakes, Analysis Services, SQL, Cosmos, or an equivalent set of cloud capabilities.
- 6 to 8 years' experience working with data: querying it, wrangling it, moving it, parsing it, cleaning it, transforming it, performing computations on it, securing it, archiving it, and serving it for analysis and visualization – there's nothing about data and data management you haven't seen or done.
- 3+ years of hands-on experience developing data lake solutions operating on cloud-native infrastructure in public and/or private cloud environments such as AWS, GCP, or Azure.
- Advanced experience with scalable data and full-text indexing solutions such as Elasticsearch/Logstash/Kibana (the ELK stack).
- Experience with data streaming technologies and real-time analytics.
- Familiarity with data serialization formats such as JSON, Parquet, and ORC, and an experience-based opinion on when one should be used over another.
- Working experience and detailed knowledge of Python, Java, JavaScript, and/or Perl.
- Knowledge of ETL, MapReduce, and pipeline tools (Glue, EMR, Spark).
- Experience with large or partitioned relational databases (Aurora, MySQL, SQL Server).
- Experience with NoSQL databases.
- Agile development (Scrum) experience.
- Other preferred experience includes DevOps practices, SaaS, IaaS, code management (Git), deployment tools (CodeBuild, CodeDeploy, Jenkins, shell scripting), and Continuous Delivery.
Why Cisco
- We connect everything: people, processes, data, and things. We innovate everywhere, taking bold risks to shape the technologies that give us smart cities, connected cars, and handheld hospitals. And we do it in style, with unique personalities who aren't afraid to change the way the world works, lives, plays, and learns.
- We are thought leaders, tech geeks, pop culture aficionados, and we even have a few purple-haired rock stars. We celebrate the creativity and diversity that fuels our innovation. We are dreamers and we are doers.