Amazon

Big Data Solution Architect

  • Seattle (King County)
  • Architecture / Town planning

Job description

Job Description: Big Data Solution Architect – SEA -L6 (DP-1)
Are you a Big Data / data lake specialist? Do you have experience building data warehouses and data lakes with Hadoop, Hive, Spark, or EMR? Do you want to have an impact on the development and use of new data analytics technologies? Would you like a career that gives you opportunities to help customers and partners use cloud computing services to build new solutions faster and at lower cost?
Responsibilities Include:
• Design, implement, and deliver complete analytic solutions for customers.
• Own the data architecture and infrastructure.
• Architect, build, and maintain high-performing ETL processes, including data quality and testing.
• Define and build technical/data architecture for data warehouses, data marts, and big data solutions (including data and dimensional modeling).
• Develop analytics with a mind toward accuracy, scalability, and high performance.
• Provide technical guidance and thought leadership to other programmer analysts on the team.
• Establish lean data governance to drive data standards and data quality. Be an evangelist in the company for data-informed thinking and decision making.
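The posting does not prescribe any particular tooling; as a minimal sketch of the "high-performing ETL processes, including data quality and testing" responsibility above, here is a plain-Python extract-transform-load step with an inline data-quality rule. The column names, sample data, and the no-negative-amounts rule are all hypothetical:

```python
import csv
import io

# Hypothetical raw feed: the columns and quality rules below are
# illustrative only, not part of the job posting.
RAW = """user_id,amount
1,19.99
2,-5.00
3,42.50
"""

def extract(stream):
    """Extract: read raw CSV rows into dicts."""
    return list(csv.DictReader(stream))

def transform(rows):
    """Transform: cast types and drop rows that fail quality rules."""
    clean = []
    for row in rows:
        amount = float(row["amount"])
        if amount < 0:  # quality rule (hypothetical): no negative amounts
            continue
        clean.append({"user_id": int(row["user_id"]), "amount": amount})
    return clean

def load(rows):
    """Load: stand-in for a warehouse write; returns a summary instead."""
    return {"row_count": len(rows), "total": sum(r["amount"] for r in rows)}

summary = load(transform(extract(io.StringIO(RAW))))
print(summary)
```

In a production cluster the same extract/transform/load shape would run on a distributed engine, with the quality rules asserted as part of the pipeline's test suite rather than inline.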

Desired profile

BASIC QUALIFICATIONS

• Hands-on experience designing and implementing distributed systems that handle terabyte- to petabyte-scale data using open-source software.
• Expert knowledge in modern distributed architectures and compute / data analytics / storage technologies on AWS
• Advanced knowledge of a programming language such as Java/Python/Scala
• Understanding of architectural principles and design patterns / styles using parallel large-scale distributed frameworks such as Hadoop / Spark
• Deep knowledge of RDBMSs (MySQL, PostgreSQL, SQL Server) and of NoSQL and columnar databases such as HBase, MongoDB, DynamoDB, Cassandra, and Vertica.
• Broad knowledge of technical solutions, design patterns, and code for medium-to-complex applications deployed in Hadoop production environments.
• Experience working in a UNIX environment, with strong shell scripting and Python skills; knowledge of Spring, Core Java, and MapReduce.
• Hands-on experience designing, developing, and maintaining software solutions in a Hadoop production cluster.
• Experience in architecting and building data warehouse systems and BI systems including ETL.
• Outstanding analytical skills, excellent team player, and a delivery mindset.
• Experience in performance troubleshooting, SQL optimization, and benchmarking.
• Strong architectural experience deploying cloud-based data solutions.
• Thorough understanding of service-oriented architectures and data processing in high-volume applications.
• Full SDLC experience (requirements gathering through production deployment).
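The qualifications above mention MapReduce and frameworks such as Hadoop and Spark. As an illustration only (not code from the posting), here is the map/reduce shape of a word count in single-process Python; a distributed framework parallelizes exactly this pattern, with a shuffle between the two phases:

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    """Map: emit (word, 1) pairs for one input line."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    """Reduce: sum counts per key, as the shuffle/reduce stage would."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big plans", "data lake"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
print(reduce_phase(pairs))  # {'big': 2, 'data': 2, 'plans': 1, 'lake': 1}
```

In Hadoop or Spark the mapper runs on each partition of the input and the framework groups pairs by key before the reducer sums them.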
