
Machine Learning 4 (Remote)

Location: Salem, OR 97301, United States
Work Type: Contract/Temp
Positions: 1 Position
  • SQL
  • AWS
  • Python
  • Spark
  • Machine Learning

REMOTE, USA

Category: Technology
  • Innovative Technology; High Quality Products, Self-Empowerment
  • Globally Responsible; Sustainable Products, Diversity of Thought
  • Celebration of Sports; If You Have a Body, You are an Athlete

Title: Machine Learning 4

Location: Remote / Salem, OR

Duration: 1-year contract

NIKE, Inc. does more than outfit the world's best athletes. It is a place to explore potential, obliterate boundaries and push out the edges of what can be. The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At Nike, it’s about each person bringing skills and passion to a challenging and constantly evolving game.

WHAT YOU WILL WORK ON

  • Develop and program integrated software algorithms to structure, analyze, and leverage data in product and systems applications across structured and unstructured environments.
  • Design and communicate descriptive, diagnostic, predictive, and prescriptive insights/algorithms.
  • Use machine learning (ML) and statistical modeling techniques (e.g., decision trees, logistic regression, Bayesian analysis) to enhance product/system performance, quality, data management, and accuracy (a brief illustrative sketch follows this list).
  • Translate algorithms and technical specifications into code using Python and other current programming languages and technologies.
  • Implement, test, debug, and optimize algorithms, ensuring efficiency and accuracy.
  • Complete documentation and procedures for software installation, maintenance, and updates.
  • Apply deep learning technologies to enable visualization, learning, and response to complex scenarios.
  • Adapt ML to applications in virtual reality, augmented reality, artificial intelligence, robotics, and interactive user experiences.
  • Work with large-scale computing frameworks, data analysis systems, and modeling environments.
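
For orientation, here is a minimal sketch of the kind of statistical modeling work described above: fitting and evaluating a logistic regression model with scikit-learn. It is an illustration only; the file name and column names are hypothetical, not part of the role's actual systems.

    # Illustrative sketch only; file and column names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical engagement data with a binary target column "converted".
    df = pd.read_csv("engagement.csv")
    X = df.drop(columns=["converted"])
    y = df["converted"]

    # Hold out 20% of rows for evaluation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # Score predictive quality on the held-out set.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Held-out ROC AUC: {auc:.3f}")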


WHAT YOU BRING

  • Proficient in Python and Spark, with experience in high-performance, parallel, and distributed computing to scale Machine Learning solutions.
  • Familiar with cloud platforms such as AWS, GCP, and Azure, and experienced in deploying Machine Learning models on these platforms, including working experience with data science platforms such as SageMaker and comparable offerings from other providers.
  • Skilled in building and maintaining Machine Learning pipelines using tools like Airflow or Kubeflow, and tracking experiments with tools such as MLflow, TensorBoard, and SageMaker Experiments (see the MLflow sketch after this list).
  • Understands model explainability and monitoring, and is proficient in scaling machine learning models to handle large datasets and high-dimensional feature spaces.
  • Experienced with distributed computing frameworks like Apache Spark and GPU acceleration tools like CUDA for efficient model training and inference.
  • Knowledgeable in techniques for model compression, quantization, and optimization for deployment in resource-constrained environments.
  • Proficient in data visualization libraries like Matplotlib, Seaborn, and Plotly, and in tools like Tableau, Splunk, and SignalFx for analyzing logs and metrics and creating dashboards.
  • Familiar with different API architectural styles such as REST, WebSocket, gRPC, and SOAP.
  • Experience with continuous integration/continuous deployment (CI/CD) tools such as Jenkins.
  • Familiar with Infrastructure as Code (IaC) tools such as Terraform for creating, updating, and versioning infrastructure safely and efficiently.
  • Experience with version control systems like Git for tracking changes in source code during software development.
  • Familiar with secure cloud environments such as AWS, GCP, or Azure, and their respective security services and best practices. This includes knowledge of IAM roles, security groups, VPCs, encryption, and compliance standards.
  • Hands-on experience with secure model deployment tools like Docker and Kubernetes, understanding of network security for data transit, and knowledge of secure data storage and handling practices.
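
As a point of reference for the experiment-tracking bullet above, here is a minimal sketch of logging a training run with MLflow. It assumes scikit-learn, MLflow's default local tracking store, and hypothetical run and parameter names; it is an illustration, not a prescribed workflow.

    # Illustrative sketch only; run name and hyperparameters are hypothetical.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data stands in for a real feature set.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run(run_name="rf-baseline"):
        n_estimators = 200
        mlflow.log_param("n_estimators", n_estimators)

        model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
        model.fit(X_train, y_train)

        # Record held-out accuracy and persist the fitted model as an artifact.
        mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, "model")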

Typically requires a Bachelor's degree and a minimum of 9 years of directly relevant work experience.

Note: One of the following alternatives may be accepted: PhD or Law degree + 6 years; Master's + 7 years; Associate's degree + 9 years.


Published on 04 Sep 2024, 3:49 AM