Docker, Kubernetes, Helm, AWS, Azure, GCP
The company enables smarter decision-making by accelerating the flow of data-driven insights. Its semantic layer platform simplifies, accelerates, and extends business intelligence and data science capabilities for enterprise customers across all industries. Its goal is to empower customers to democratize data, implement self-service BI, and build a more agile analytics infrastructure for more impactful decision-making.
This is a dynamic and innovative company with a no-nonsense culture. They combine the best aspects of the Bay Area, Boston, and Bulgaria to create a unique and high-achieving team. They are driven by the ambition to make a difference while maintaining a fun and engaging work environment.
You will be a core contributor to the design and development of cutting-edge technologies used by the world’s largest organizations for data analytics.
This position will collaborate with product designers and other technical leads to tackle complex problems in analytics computation and data management.
You will not only tackle the complexities of algorithm and data structure design but also integrate with modern technologies for data warehousing (e.g. Snowflake), data engineering (e.g. dbt), and analytics.
Further, you'll be responsible for the architectural implications of deployment and infrastructure concerns such as cloud and container technologies.
If you are passionate about DevOps, have a strong technical background, and thrive in a fast-paced and challenging environment, we invite you to apply for the DevOps Engineer position.
- Design, build, orchestrate, and automate infrastructure, applications, and monitoring tools.
- Implement and optimize continuous integration and deployment (CI/CD) pipelines to streamline the software development lifecycle.
- Define requirements, estimate work, track dependencies, report progress, and highlight blockers.
- Manage and monitor cloud resources on platforms such as AWS, ensuring optimal performance and cost efficiency.
- Troubleshoot and resolve infrastructure and deployment issues, providing timely resolutions to minimize downtime and disruptions.
- Implement and maintain configuration management tools and practices, ensuring consistency and repeatability across environments.
- Collaborate with network administrators to ensure the stability and security of our network infrastructure.
- Stay updated with emerging technologies and industry trends, identifying opportunities for improvement and implementing best practices.
- Mentor and train interns, sharing your knowledge and expertise to foster their professional growth.
- 3+ years of recent experience as a DevOps Engineer.
- Expertise with Docker, Kubernetes, and Helm.
- Experience with cloud-native development on cloud providers such as AWS, Microsoft Azure, or GCP.
- Experience with Jenkins and Nexus.
- Experience with CI/CD and testing.
- Familiarity with Git for version control and code management.
- Familiarity with Python.
Preference will be given to candidates with:
- Experience contributing to a production code base.
- Experience with DevOps best practices and tools such as GitHub and related automation.
- Experience designing robust systems for high availability (HA), failover, and disaster recovery.
- Experience in the design, configuration, and maintenance of services for metrics, logging, and monitoring of the platform using tools such as ELK, Prometheus, and Grafana.
- Familiarity with cloud security considerations such as NACLs, Security Groups, RBAC, etc.
- Familiarity with different types of databases and their cloud-native variants (e.g. Snowflake, BigQuery, RDS, Redshift, Neptune, DynamoDB, Cosmos DB, Athena).
- Experience designing and automating ETL/ELT workflows using cloud services such as AWS Data Pipeline, Glue, EMR, and Airflow, or Azure Data Factory.
36,000–64,800 USD / year
Join a team of passionate people committed to redefining the way business intelligence and AI are done.