
Senior Data Engineer (Remote)

Location: Beaverton, OR 97006 - United States
Work Type: Contract/Temp
Positions: 1 Position
Salary Range: US$64 - 69 per hour
  • AWS
  • Python
  • Database Management
  • Database Modeling
Category: Insights & Analytics
  • Innovative Technology; High Quality Products, Self-Empowerment
  • Globally Responsible; Sustainable Products, Diversity of Thought
  • Celebration of Sports; If You Have a Body, You Are an Athlete

Title: Sr. Data Engineer

Location: Remote, US

Duration: 10-month contract

Responsibilities

  • Establishes database management systems, standards, guidelines and quality assurance for database deliverables.
  • Documents and communicates database design.
  • Evaluates and installs database management systems.
  • Codes complex programs and derives logical processes on technical platforms.
  • Builds windows, screens and reports. Assists in the design of user interface and business application prototypes.
  • Participates in quality assurance and develops test application code in client server environment.
  • Provides expertise in devising, negotiating and defending the tables and fields provided in the database.
  • Adapts business requirements, developed by modeling/development staff and systems engineers, and develops the data, database specifications, and table and element attributes for an application.
  • Helps to develop an understanding of client's original data and storage mechanisms.
  • Determines appropriateness of data for storage and optimum storage organization.
  • Determines how tables relate to each other and how fields interact within the tables for a relational model.
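As a minimal illustration of the relational-modeling responsibilities above, the sketch below uses Python's standard-library `sqlite3` to define two related tables and recover the relationship with a join. The table and field names (`customer`, `orders`, `customer_id`) are hypothetical, chosen for illustration only; they are not from this posting.

```python
import sqlite3

# Sketch: how a foreign key defines the way tables relate in a
# relational model. All table/field names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Parent table: one row per customer.
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )
""")

# Child table: each order references exactly one customer.
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total_usd   REAL NOT NULL
    )
""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (100, 1, 250.0)")

# The foreign key encodes the relationship; a join reconstructs it.
row = conn.execute("""
    SELECT c.name, o.total_usd
    FROM orders AS o
    JOIN customer AS c ON c.customer_id = o.customer_id
""").fetchone()
print(row)  # ('Acme', 250.0)
```

The same pattern generalizes: deciding which table owns a field, and which tables carry foreign keys, is the core of the "tables and fields" design work described in the responsibilities.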

Skills

  • 5+ years of experience in Big Data stack environments such as AWS EMR, Cloudera, and Hortonworks
  • 3+ years of experience with Spark in batch and streaming modes
  • 3+ years of experience in scripting using Python
  • 3+ years of experience working in the AWS cloud environment
  • In-depth knowledge of Hive and S3
  • Nice-to-have skills include streaming solutions such as Kafka, AWS Lambda, and API Gateway, and NoSQL databases such as DynamoDB
  • Strong understanding of Hadoop, MPP systems and data structures
  • Strong understanding of solution and technical design
  • Experience building scalable, high-performance data lake solutions in the cloud
  • Experience with relational SQL and tools such as Snowflake
  • Awareness of data warehouse concepts
  • Experience with performance tuning on large datasets
  • Experience with source control tools such as GitHub and related dev processes
  • Experience with workflow scheduling tools like Airflow
  • Strong problem-solving and analytical mindset
  • Able to influence and communicate effectively, verbally and in writing, with team members and business stakeholders