DevOps Engineer (A.S) #127

  • DevOps
  • Full Time
  • Remote
  • 36,000–64,800 USD / year
  • This position has been filled

Tech stack: Linux/Bash, Git, AWS, Docker, Python, Jenkins / Nexus, Terraform, and JavaScript.

Company Overview

The company enables smarter decision-making by accelerating the flow of data-driven insights. Its semantic layer platform simplifies, accelerates, and extends business intelligence and data science capabilities for enterprise customers across all industries. Its goal is to empower customers to democratize data, implement self-service BI, and build a more agile analytics infrastructure for more impactful decision-making.

This is a dynamic and innovative company with a no-nonsense culture. It combines the best aspects of the Bay Area, Boston, and Bulgaria to create a unique, high-achieving team driven by the ambition to make a difference while maintaining a fun and engaging work environment.

Job Description

You will be a core contributor to the design and development of cutting-edge technologies used by the world’s largest organizations for data analytics.

In this position, you will collaborate with product designers and other technical leads to tackle complex problems in analytics computation and data management.

You will not only tackle the complexities of algorithm and data structure design, but also integrate with modern technologies for data warehousing (e.g. Snowflake), data engineering (e.g. dbt), and analytics.

Further, you’ll be responsible for considering the architectural implications of deployment and infrastructure concerns such as cloud and container technologies.

If you are passionate about DevOps, have a strong technical background, and thrive in a fast-paced and challenging environment, we invite you to apply for the DevOps Engineer position.

Responsibilities

  • Design, build, orchestrate, and automate infrastructure, applications, and monitoring tools.
  • Implement and optimize continuous integration and deployment (CI/CD) pipelines to streamline the software development lifecycle.
  • Define requirements, estimate work, track dependencies, report progress, and highlight blockers.
  • Manage and monitor cloud resources on platforms such as AWS, ensuring optimal performance and cost efficiency.
  • Troubleshoot and resolve infrastructure and deployment issues, providing timely resolutions to minimize downtime and disruptions.
  • Implement and maintain configuration management tools and practices, ensuring consistency and repeatability across environments.
  • Collaborate with network administrators to ensure the stability and security of our network infrastructure.
  • Stay updated with emerging technologies and industry trends, identifying opportunities for improvement and implementing best practices.
  • Mentor and train interns, sharing your knowledge and expertise to foster their professional growth.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related fields.
  • 3+ years of recent experience as a DevOps Engineer.
  • Experience with cloud-native development on cloud providers such as AWS, Microsoft Azure, or GCP (e.g. AWS API Gateway, Lambda, SMS, ECS, EKS).
  • Experience with Linux and network administration.
  • Experience with scripting languages such as Python and Bash.
  • Experience with JavaScript.
  • Experience with Jenkins and Nexus.
  • Experience with infrastructure-as-code and technologies such as Terraform.
  • Experience with containerization technologies like Docker.
  • Experience with Git for version control and code management.
  • Experience with CI/CD and testing.

Preference will be given to candidates with

  • Experience contributing to a production code base.
  • Experience with DevOps best practices and tools such as GitHub and related automation.
  • Experience designing robust systems for high availability (HA), failover, and disaster recovery.
  • Experience designing, configuring, and maintaining services for metrics, logging, and monitoring of the platform using tools such as ELK, Prometheus, and Grafana.
  • Familiarity with cloud security considerations such as NACLs, Security Groups, RBAC, etc.
  • Familiarity with different types of databases and cloud-native variants (e.g. Snowflake, BigQuery, RDS, Redshift, Neptune, DynamoDB, Cosmos, Athena).
  • Experience designing and automating ETL/ELT workflows using cloud services such as AWS Data Pipeline, Glue, EMR, and Airflow, or Azure Data Factory.

Hiring Process

Candidates will take a multiple-choice technical test and must achieve a minimum score of 60% to be considered for the role. The test will assess the following areas:

  • Linux Administration: 5 questions
  • Git: 4 questions
  • Amazon Web Services (AWS): 9 questions
  • Docker: 4 questions
  • Network Administration: 7 questions
  • Python 3: 7 questions
  • Bash: 3 questions
  • Jenkins / Nexus: 6 questions
  • Terraform: 5 questions
  • JavaScript: 7 questions

Salary Range

36,000–64,800 USD / year

Join a team of passionate people committed to redefining the way business intelligence and AI are done.

Ready to get started? Click on the button below.

You’ll be redirected to our application form, where you will choose the job that you wish to apply for. Remember this job’s ID number (#127).
