Data Engineer Remote Jobs

107 Results

+30d

Senior Data Engineer

Handshake, Remote (USA)
sql, Design, c++, docker

Handshake is hiring a Remote Senior Data Engineer

Everyone is welcome at Handshake. We know diverse teams build better products and we are committed to creating an inclusive culture built on a foundation of respect for all individuals. We strongly encourage candidates from non-traditional backgrounds, historically marginalized or underrepresented groups to apply.

Your impact

At Handshake, we are assembling a team of dynamic engineers who are passionate about creating high-quality, impactful products. As a Senior Data Engineer, you will play a key role in driving the architecture, implementation, and evolution of our cutting-edge data platform. Your technical expertise will be instrumental in helping millions of students discover meaningful careers, irrespective of their educational background, network, or financial resources.

Our primary focus is on building a robust data platform that empowers all teams to develop data-driven features while ensuring that every facet of the business has access to the right data to make informed decisions. While this individual will collaborate closely with our ML teams, they will also support the data needs of the business as a whole.

Your role

  • Technical leadership: Taking ownership of the data engineering function and providing technical guidance to the data engineering team. Mentoring junior data engineers, fostering a culture of learning, and promoting best practices in data engineering.
  • Collaborating with cross-functional teams: Working closely with product managers, product engineers, and other stakeholders to define data requirements, design data solutions, and deliver high-quality, data-driven features.
  • Data architecture and design: Designing and implementing scalable and robust data pipelines, data services, and data products that meet business needs and adhere to best practices. Staying abreast of emerging technologies and tools in the data engineering space, evaluating their potential impact on the data platform, and making strategic recommendations.
  • Performance optimization: Identifying performance bottlenecks in data processes and implementing solutions to enhance data processing efficiency.
  • Data quality and governance: Ensuring data integrity, reliability, and security through the implementation of data governance policies and data quality monitoring.
  • Advancing our Generative AI strategy: Leveraging your Data Engineering knowledge to design and implement data pipelines that support our Generative AI initiatives, advising and working in collaboration with our ML teams.
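The data quality monitoring mentioned above can be pictured with a small sketch: a table of rules applied to each row of a batch. The rules and field names below are invented for illustration only, not Handshake's actual checks.

```python
from typing import Callable

# Hypothetical rule table: (rule name, predicate over a row dict).
Rule = tuple[str, Callable[[dict], bool]]

RULES: list[Rule] = [
    ("email_present", lambda row: bool(row.get("email"))),
    ("grad_year_valid", lambda row: 1950 <= row.get("grad_year", 0) <= 2035),
]

def audit(rows: list[dict]) -> dict[str, int]:
    """Count rule violations across a batch of rows."""
    failures = {name: 0 for name, _ in RULES}
    for row in rows:
        for name, check in RULES:
            if not check(row):
                failures[name] += 1
    return failures
```

In a real pipeline this kind of audit would typically run as a scheduled job and feed alerting, rather than being called inline.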

Your experience

  • Extensive data engineering experience: A proven track record in designing and implementing large-scale, complex data pipelines, data warehousing solutions, and data services. Deep knowledge of data engineering technologies, tools, and frameworks.
  • Cloud platform proficiency: Hands-on experience with cloud-based data technologies, preferably Google Cloud Platform (GCP), including BigQuery, Dataflow, Bigtable, and more.
  • Advanced SQL skills: Strong expertise in SQL and experience with data modeling and database design conventions.
  • Problem-solving abilities: Exceptional problem-solving skills, with the ability to tackle complex data engineering challenges and propose innovative solutions.
  • Collaborative mindset: A collaborative and team-oriented approach to work, with the ability to communicate effectively with both technical and non-technical stakeholders.

Bonus areas of expertise

  • Machine learning for data enrichment: Experience in applying machine learning techniques to data engineering tasks for data enrichment and augmentation.
  • End-to-end data service deployment: Comfortable with product alignment of data-driven initiatives.
  • Containerization and orchestration: Familiarity with containerization technologies like Docker and container orchestration platforms like Kubernetes.
  • dbt: Experience with dbt as a data transformation tool for orchestrating and organizing data pipelines.
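For readers unfamiliar with dbt: it determines the run order of transformation models from their dependency graph (declared via `ref()` calls). The toy sketch below mimics that scheduling idea in plain Python with hypothetical model names; it is not dbt's actual implementation.

```python
from graphlib import TopologicalSorter

# Hypothetical model -> upstream dependencies, mimicking a dbt ref() graph.
deps = {
    "stg_orders": set(),
    "stg_customers": set(),
    "fct_orders": {"stg_orders", "stg_customers"},
    "rpt_revenue": {"fct_orders"},
}

# A valid run order: every model runs only after its upstreams.
run_order = list(TopologicalSorter(deps).static_order())
```

dbt adds a great deal on top of this (SQL compilation, materializations, tests), but dependency-ordered execution is the core scheduling idea.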

Compensation range

$173,000-$213,580

For cash compensation, we set standard ranges for all U.S.-based roles based on function, level, and geographic location, benchmarked against similar stage growth companies. In order to be compliant with local legislation, as well as to provide greater transparency to candidates, we share salary ranges on all job postings regardless of desired hiring location. Final offer amounts are determined by multiple factors, including geographic location as well as candidate experience and expertise, and may vary from the amounts listed above.

About us

Handshake is the #1 place to launch a career with no connections, experience, or luck required. The platform connects up-and-coming talent with 750,000+ employers - from Fortune 500 companies like Google, Nike, and Target to thousands of public school districts, healthcare systems, and nonprofits. In 2022 we announced our $200M Series F funding round. This Series F fundraise and valuation of $3.5B will fuel Handshake’s next phase of growth and propel our mission to help more people start, restart, and jumpstart their careers.

When it comes to our workforce strategy, we’ve thought deeply about how work-life should look here at Handshake. With our Hub-Based Remote Working strategy, employees can enjoy the flexibility of remote work while ensuring that collaboration and team experiences in a shared space remain possible. Handshake is headquartered in San Francisco with offices in Denver, New York, London, and Berlin, and teammates working globally.

Check out our careers site to find a hub near you!

What we offer

At Handshake, we'll give you the tools to feel healthy, happy and secure.

Benefits below apply to employees in full-time positions.

  • Equity and ownership in a fast-growing company.
  • 16 weeks of paid parental leave for birth-giving parents & 10 weeks of paid parental leave for non-birth-giving parents.
  • Comprehensive medical, dental, and vision policies including LGBTQ+ coverage. We also provide resources for mental health assistance, employee assistance programs, and counseling support.
  • Handshake offers a $500/£360 home office stipend for you to spend during your first 3 months to create a productive and comfortable workspace at home.
  • Generous learning & development opportunities and an annual $2,000/£1,500/€1,850 stipend for you to grow your skills and career.
  • Financial coaching through Origin to help you through your financial journey.
  • Monthly internet stipend and a brand new MacBook to allow you to do your best work.
  • Monthly commuter stipend for you to expense your travel to the office (for office-based employees).
  • Free lunch provided twice a week across all offices.
  • Referral bonus to reward you when you bring great talent to Handshake.

(US-specific benefits, in addition to the first section)

  • 401k match: Handshake offers a dollar-for-dollar match on 1% of deferred salary, up to a maximum of $1,200 per year.
  • All full-time US-based Handshakers are eligible for our flexible time off policy to get out and see the world. In addition, we offer 8 standardized holidays and 2 additional days of flexible holiday time off. Lastly, we have a Winter #ShakeBreak, a one-week period of Collective Time Off.
  • Lactation support: Handshake partners with Milk Stork to provide comprehensive, 100% employer-sponsored lactation support to traveling parents and guardians.

(UK-specific benefits, in addition to the first section) 

  • Pension scheme: Handshake will provide you with a workplace pension, where you will make contributions based on 5% of your salary. Handshake will pay the equivalent of 3% towards your pension plan, subject to qualifying earnings limits.
  • Up to 25 days of vacation to encourage people to reset, recharge, and refresh, in addition to 8 bank holidays throughout the year.
  • Regular offsites each year to bring the team together + opportunity to travel to our HQ in San Francisco.
  • Discounts across various high street retailers, cinemas and other social activities exclusively for Handshake UK employees.

(Germany-specific benefits, in addition to the first section)

  • 25 days of annual leave + we have a Winter #ShakeBreak, a one-week period of Collective Time Off across the company.
  • Regular offsites each year to bring the team together + opportunity to travel to our HQ in San Francisco once a year.
  • Urban Sports Club membership offering access to a diverse network of fitness and wellness facilities.
  • Discounts across various high street retailers, cinemas and other social activities exclusively for Handshake Germany employees.

For roles based in Romania: Please ask your recruiter about region specific benefits.

Looking for more? Explore our mission, values and comprehensive US benefits at joinhandshake.com/careers.

Handshake is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or reasonable accommodation, please reach out to us at people-hr@joinhandshake.com.

See more jobs at Handshake

Apply for this job

+30d

Staff Data Engineer

Procore Technologies, Bangalore, India, Remote
scala, airflow, sql, Design, UX, java, kubernetes, python

Procore Technologies is hiring a Remote Staff Data Engineer

Job Description

We’re looking for a Staff Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon including Procore developers, analysts, partners, and customers. 

As a Staff Data Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the movement, transformation, and retrieval of data for use in analytics, machine learning, and service integration. To be successful in this role, you should be passionate about distributed systems, including storage, streaming, and batch data processing technologies on the cloud, with a strong bias for action and outcomes. If you’re a seasoned data engineer excited about building our next-generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities, we’d love to hear from you!

This is a full-time position reporting to our Senior Manager of Software Engineering. It is based in our India office, but employees can choose to work remotely. We are looking for someone to join our team immediately.

What you’ll do: 

  • Participate in the design and implementation of our next-generation data platform for the construction industry
  • Define and implement operational and dimensional data models and transformation pipelines to support reporting and analytics
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Understand our current data models and infrastructure, proactively identify areas for improvement, and prescribe architectural recommendations with a focus on performance and accessibility. 
  • Work alongside our Product, UX, and IT teams, leveraging your expertise in the data space to influence our product roadmap, developing innovative solutions that add additional value to our platform
  • Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
  • Stay up to date with the latest data technology trends
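As a loose illustration of the dimensional modeling work described above, here is a minimal star-schema sketch using Python's built-in sqlite3: one fact table keyed to one dimension, queried with a typical analytical rollup. The table names, columns, and construction-flavored RFI example are invented, not Procore's schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_project (project_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fct_rfi (rfi_id INTEGER PRIMARY KEY,
                          project_id INTEGER REFERENCES dim_project,
                          days_open INTEGER);
""")
con.executemany("INSERT INTO dim_project VALUES (?, ?)",
                [(1, "US"), (2, "EU")])
con.executemany("INSERT INTO fct_rfi VALUES (?, ?, ?)",
                [(10, 1, 4), (11, 1, 6), (12, 2, 3)])

# Typical analytical rollup: average days an RFI stays open, by region.
rows = con.execute("""
    SELECT d.region, AVG(f.days_open)
    FROM fct_rfi f JOIN dim_project d USING (project_id)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
```

In a warehouse like Snowflake or Redshift the same shape holds; facts carry measures and foreign keys, dimensions carry descriptive attributes for grouping.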

What we’re looking for: 

  • Bachelor’s Degree in Computer Science or a related field is preferred, or comparable work experience 
  • 8+ years of experience building and operating cloud-based, highly available, and scalable data platforms and pipelines supporting vast amounts of data for reporting and analytics
  • 2+ years of experience building data warehouses in Snowflake or Redshift
  • Hands-on experience with MPP query engines like Snowflake, Presto, Dremio, and Spark SQL
  • Expertise in relational and dimensional data modeling.
  • Understanding of data access patterns, streaming technology, data validation, performance optimization, and cost optimization
  • Strength in commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Flink, Airflow, Kubernetes, or similar
  • Strong passion for learning, always open to new technologies and ideas

Qualifications

See more jobs at Procore Technologies

Apply for this job

+30d

Principal Data Engineer

Procore Technologies, Bangalore, India, Remote
scala, nosql, airflow, Design, azure, UX, java, docker, postgresql, kubernetes, jenkins, python, AWS

Procore Technologies is hiring a Remote Principal Data Engineer

Job Description

We’re looking for a Principal Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon including Procore developers, analysts, partners, and customers. 

As a Principal Data Engineer, you’ll use your expert-level technical skills to craft innovative solutions while influencing and mentoring other senior technical leaders. To be successful in this role, you should be passionate about distributed systems, including caching, streaming, and indexing technologies on the cloud, with a strong bias for action and outcomes. If you’re an inspirational leader comfortable translating vague problems into pragmatic solutions that open up the boundaries of technical possibilities, we’d love to hear from you!

This position reports to the Senior Manager, Reporting and Analytics. It can be based in our Bangalore or Pune office, or performed remotely from a location in India. We’re looking for someone to join us immediately.

What you’ll do: 

  • Design and build the next-generation data platform for the construction industry
  • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
  • Contribute to setting standards and development principles across multiple teams and the larger organization
  • Stay connected with other architectural initiatives and craft a data platform architecture that supports and drives our overall platform
  • Provide technical leadership to efforts around building a robust and scalable data pipeline to support billions of events
  • Help identify and propose solutions for technical and organizational gaps in our data pipeline by running proof of concepts and experiments working with Data Platform Engineers on implementation
  • Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in the data space to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools

What we’re looking for: 

  • Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience is required; MS or Ph.D. degree in Computer Science or a related field is preferred
  • 10+ years of experience building and operating cloud-based, highly available, and scalable online serving or streaming systems utilizing large, diverse data sets in production
  • Expertise with diverse data technologies like Databricks, PostgreSQL, GraphDB, NoSQL databases, Mongo, Cassandra, Elasticsearch, Snowflake, etc.
  • Strength in the majority of commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Airflow, Kubernetes, Docker, Argo, Jenkins, or similar
  • Expertise with all aspects of data systems, including ETL, aggregation strategy, performance optimization, and technology trade-off
  • Understanding of data access patterns, streaming technology, data validation, data modeling, data performance, and cost optimization
  • Experience defining data engineering/architecture best practices at a department and organizational level and establishing standards for operational excellence and code and data quality at a multi-project level
  • Strong passion for learning, always open to new technologies and ideas
  • AWS and Azure experience is preferred

Qualifications

See more jobs at Procore Technologies

Apply for this job

+30d

Data Engineer PySpark AWS

2 years of experience, agile, Bachelor's degree, jira, terraform, scala, airflow, postgres, sql, oracle, Design, mongodb, java, mysql, jenkins, python, AWS

FuseMachines is hiring a Remote Data Engineer PySpark AWS

See more jobs at FuseMachines

Apply for this job

+30d

Senior Data Engineer

Devoteam, Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Senior Data Engineer

Job Description

Within the "Data Platform" department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on GCP, using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of SQL queries and ETL processes to guarantee fast response times and scalability.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
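As a small, self-contained illustration of the SQL performance optimization work mentioned above (using Python's built-in sqlite3 rather than BigQuery, with an invented schema): adding an index turns a full table scan into an index search, which is the kind of change that keeps response times fast as data grows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)])

def plan(sql: str) -> str:
    """Return SQLite's query plan as a single string."""
    return " ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"
before = plan(query)   # full table scan of events
con.execute("CREATE INDEX idx_user ON events(user_id)")
after = plan(query)    # search using idx_user instead
```

Warehouse engines like BigQuery optimize differently (partitioning and clustering rather than B-tree indexes), but inspecting the query plan before and after a change is the same workflow.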

Qualifications

  • A Master's-level engineering degree (Bac+5) or equivalent university degree, with a specialization in computer science.
  • At least 3 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
  • GCP (Google Cloud Platform) certification is a plus.
  • Excellent written and verbal communication (high-quality deliverables and reporting).

See more jobs at Devoteam

Apply for this job

+30d

Senior Data Engineer (UK REMOTE)

Turnitin LLC, London, United Kingdom, Remote
4 years of experience, Design, azure, java, elasticsearch, python, AWS

Turnitin LLC is hiring a Remote Senior Data Engineer (UK REMOTE)

Job Description

Your role as a Senior Data Engineer entails a range of responsibilities, necessitating a balanced skillset:

  • AI Data Engineering: Design, build, operate and deploy real-time data pipelines at scale using AI techniques and best practices. Support Turnitin's AI R&D efforts by applying advanced data warehousing, data science, and data engineering technologies. Aim for automation to enable a faster time-to-market and better reusability of new AI initiatives.
  • Collaboration: Work in tandem with the AI R&D teams and the Data Platform Team to collect, create, curate and maintain high-quality AI datasets. Ensure alignment of data architecture and data models across different products and platforms.
  • Innovation: Unearth insights from Turnitin's rich data resources through innovative research and development.
  • Hands-on Involvement: Engage in data engineering and data science tasks as required to support the team and its projects. Conduct and own external data collection efforts, including state-of-the-art prompt engineering techniques, to support the construction of AI models.
  • Communication: Foster clear communication within the team and the organization, and ensure understanding of the company's vision and mission.
  • Continuous Learning: Keep abreast of new tools and development strategies, bringing innovative recommendations to leadership.

Qualifications

  • At least 4 years of experience in data engineering, ideally focused on enabling and accelerating AI R&D.
  • Strong proficiency in Python, Java, and SQL.
  • Proficiency with Redshift, Hadoop, Elasticsearch, and cloud platforms (AWS, Azure, GCP).
  • Familiarity interacting with AI frameworks including PyTorch and TensorFlow and AI libraries such as Huggingface and Scikit-Learn.
  • Experience with Large Language Models (LLMs) and LLM APIs.
  • Strong problem-solving, analytical, and communication skills, along with the ability to thrive in a fast-paced, collaborative environment.

Desired Qualifications

  • 6+ years of experience in data engineering with a focus on AI and machine learning projects.
  • Experience in a technical leadership role.
  • Familiarity with natural language processing (NLP) techniques and tools.
  • Experience in the education or education technology sectors.
  • Experience with data visualization and data communications.

Characteristics for Success

As a Senior Data Engineer, you should possess:
  • A passion for creatively solving complex data problems.
  • The ability to work collaboratively and cross-functionally.
  • A continuous learning mindset, always striving to improve your skills and knowledge.
  • A proven track record of delivering results and ensuring a high level of quality.
  • Strong written and verbal communication skills.
  • Curiosity about the problems at hand, the field at large, and the best solutions.
  • Strong system-level problem-solving skills.

Apply for this job

+30d

Data Engineer

phData, LATAM - Remote
scala, sql, azure, java, python, AWS

phData is hiring a Remote Data Engineer


See more jobs at phData

Apply for this job

+30d

Senior Data Engineer

Samsara, Remote - US
agile, sql, oracle, Design, azure, api, docker, postgresql, mysql, kubernetes, python, AWS, backend

Samsara is hiring a Remote Senior Data Engineer

Who we are

Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, a platform that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. At Samsara, we are helping improve the safety, efficiency, and sustainability of the physical operations that power our global economy. Representing more than 40% of global GDP, these industries are the infrastructure of our planet, including agriculture, construction, field services, transportation, and manufacturing, and we are excited to help digitally transform their operations at scale.

Working at Samsara means you’ll help define the future of physical operations and be on a team that’s shaping an exciting array of product solutions, including Video-Based Safety, Vehicle Telematics, Apps and Driver Workflows, Equipment Monitoring, and Site Visibility. As part of a recently public company, you’ll have the autonomy and support to make an impact as we build for the long term. 

Recent awards we’ve won include:

Glassdoor's Best Places to Work 2024

Best Places to Work by Built In 2024

Great Place To Work Certified™ 2023

Fast Company's Best Workplaces for Innovators 2023

Financial Times The Americas’ Fastest Growing Companies 2023

We see a profound opportunity for data to improve the safety, efficiency, and sustainability of operations, and hope you consider joining us on this exciting journey. 

Click here to learn more about Samsara's cultural philosophy.

About the role:

Data and Analytics is a critical team within Business Technology. Our mission is to enable integrated data layers for all of Samsara and Samsara's customers, with the insights, tools, infrastructure, and consultation to make data-driven decisions. We are a growing team that loves all things data! The team will be composed of data engineers, architects, analysts, and data scientists. We are passionate about leveraging world-class data and analytics to deliver a great customer experience.

Our team promotes an agile, collaborative, supportive environment where diverse thinking, innovative design, and experimentation is welcomed and encouraged.

You should apply if:

  • You want to impact the industries that run our world: Your efforts will result in real-world impact—helping to keep the lights on, get food into grocery stores, reduce emissions, and most importantly, ensure workers return home safely.
  • You are the architect of your own career: If you put in the work, this role won’t be your last at Samsara. We set our employees up for success and have built a culture that encourages rapid career development and offers countless opportunities to experiment and master your craft in a hyper-growth environment.
  • You’re energized by our opportunity: The vision we have to digitize large sectors of the global economy requires your full focus and best efforts to bring forth creative, ambitious ideas for our customers.
  • You want to be with the best: At Samsara, we win together, celebrate together and support each other. You will be surrounded by a high-caliber team that will encourage you to do your best. 

Click here to learn about what we value at Samsara.

In this role, you will:

  • Develop and maintain end-to-end (E2E) data pipelines and backend ingestion, and participate in building Samsara’s Data Platform to enable advanced automation and analytics.
  • Work with data from a variety of sources including but not limited to: CRM data, Product data, Marketing data, Order flow data, Support ticket volume data.
  • Manage critical data pipelines to enable our growth initiatives and advanced analytics.
  • Facilitate data integration and transformation requirements for moving data between applications; ensuring interoperability of applications with data layers and data lake.
  • Develop and improve the current data architecture, data quality, monitoring, observability and data availability.
  • Write data transformations in SQL/Python to generate data products consumed by customer systems and Analytics, Marketing Operations, Sales Operations teams.
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices. 
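The "write data transformations in SQL/Python to generate data products" bullet above can be pictured with a tiny, hypothetical example: rolling raw support tickets up into a per-account data product. All field names are invented for illustration.

```python
from collections import defaultdict

def ticket_volume(tickets: list[dict]) -> list[dict]:
    """Aggregate raw ticket records into a per-account volume table."""
    counts: dict[str, int] = defaultdict(int)
    for t in tickets:
        counts[t["account_id"]] += 1
    # Sorted output so downstream consumers see a stable ordering.
    return [{"account_id": a, "ticket_count": n}
            for a, n in sorted(counts.items())]
```

In practice this step would read from a warehouse table and write the result back (or hand it to dbt); the transformation logic itself is the part sketched here.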

Minimum requirements for the role:

  • A Bachelor’s degree in computer science, data engineering, data science, information technology, or equivalent engineering program.
  • 5+ years of work experience as a data engineer, including 3+ years of experience designing, developing, testing, and maintaining E2E data pipelines.
  • Experience with modern cloud-based data-lake and data-warehousing technology stacks, and familiarity with typical data-engineering tools, ETL/ELT, and data-warehousing processes and best practices.
  • Experience with the following:
    • Languages: Python, SQL.
    • Exposure to ETL tools such as Fivetran, DBT or equivalent.
    • API: Exposure to python based API frameworks for data pipelines. 
    • RDBMS: MySQL, AWS RDS/Aurora MySQL, PostgreSQL, Oracle, MS SQL-Server or equivalent.
    • Cloud: AWS, Azure and/or GCP.
    • Data warehouse: Databricks, Google Big Query, AWS Redshift, Snowflake or equivalent.

An ideal candidate has:

  • Comfort working with business customers to gather requirements and gain a deep understanding of varied datasets.
  • A self-starter: motivated, responsible, innovative, and technology-driven, performing well both solo and as a team member.
  • A proactive problem solver with strong communication and project management skills, able to relay findings and solutions to technical and non-technical audiences.
  • ETL and orchestration experience: Fivetran, Alteryx, DBT, or equivalent.
  • Logging and monitoring: one or more of Splunk, DataDog, AWS CloudWatch, or equivalent.
  • AWS serverless: AWS API Gateway, Lambda, S3, SNS, SQS, Secrets Manager.
  • Other: Docker, Kubernetes, AWS ECR, AWS Fargate, AWS IAM.
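To make the serverless items above concrete, here is a hedged sketch of a Lambda-style handler for an SQS-triggered pipeline step. It makes no AWS SDK calls; the event shape follows AWS's documented SQS-to-Lambda payload, and the field names inside the message body are invented.

```python
import json

def handler(event: dict, context=None) -> dict:
    """Parse SQS-delivered messages into rows for a downstream pipeline."""
    rows = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])  # SQS puts the message in "body"
        rows.append({"order_id": body["order_id"], "amount": body["amount"]})
    return {"processed": len(rows), "rows": rows}
```

A real handler would also deal with partial batch failures and write results to S3 or a warehouse; the event-parsing skeleton is the part shown here.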

Samsara’s Compensation Philosophy:Samsara’s compensation program is designed to deliver Total Direct Compensation (based on role, level, and geography) that is at or above market. We do this through our base salary + bonus/variable + restricted stock unit awards (RSUs) for eligible roles.  For eligible roles, a new hire RSU award may be awarded at the time of hire, and additional RSU refresh grants may be awarded annually. 

We pay for performance, and top performers in eligible roles may receive above-market equity refresh awards which allow employees to achieve higher market positioning.

The range of annual base salary for full-time employees for this position is below. Please note that base pay offered may vary depending on factors including your city of residence, job-related knowledge, skills, and experience.
$112,455–$189,000 USD

At Samsara, we welcome everyone regardless of their background. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender, gender identity, sexual orientation, protected veteran status, disability, age, and other characteristics protected by law. We depend on the unique approaches of our team members to help us solve complex problems. We are committed to increasing diversity across our team and ensuring that Samsara is a place where people from all backgrounds can make an impact.

Benefits

Full time employees receive a competitive total compensation package along with employee-led remote and flexible working, health benefits, Samsara for Good charity fund, and much, much more. Take a look at our Benefits site to learn more.

Accommodations 

Samsara is an inclusive work environment, and we are committed to ensuring equal opportunity in employment for qualified persons with disabilities. Please email accessibleinterviewing@samsara.com or click here if you require any reasonable accommodations throughout the recruiting process.

Flexible Working 

At Samsara, we embrace a flexible working model that caters to the diverse needs of our teams. Our offices are open for those who prefer to work in-person and we also support remote work where it aligns with our operational requirements. For certain positions, being close to one of our offices or within a specific geographic area is important to facilitate collaboration, access to resources, or alignment with our service regions. In these cases, the job description will clearly indicate any working location requirements. Our goal is to ensure that all members of our team can contribute effectively, whether they are working on-site, in a hybrid model, or fully remotely. All offers of employment are contingent upon an individual’s ability to secure and maintain the legal right to work at the company and in the specified work location, if applicable.

Fraudulent Employment Offers

Samsara is aware of scams involving fake job interviews and offers. Please know we do not charge fees to applicants at any stage of the hiring process. Official communication about your application will only come from emails ending in ‘@samsara.com’ or ‘@us-greenhouse-mail.io’. For more information regarding fraudulent employment offers, please visit our blog post here.

Apply for this job

+30d

Senior Data Engineer

Instacart, Canada - Remote
airflow, sql, Design

Instacart is hiring a Remote Senior Data Engineer

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size-fits-all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or their favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

Overview

 

At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers every year use Instacart to buy their groceries online, and the Data Engineering team is building the critical data pipelines that underpin the myriad ways data is used across Instacart to support our customers and partners.

About the Role 

 

The Finance data engineering team plays a critical role in defining how financial data is modeled and standardized for uniform, reliable, timely, and accurate reporting. This is a high-impact, high-visibility role owning critical data integration pipelines and models across all of Instacart’s products. This role is an exciting opportunity to join a key team shaping the post-IPO financial data vision and roadmap for the company.

 

About the Team 

 

Finance data engineering is part of the Infrastructure Engineering pillar, working closely with accounting, billing & revenue teams to support the monthly/quarterly book close, retailer invoicing and internal/external financial reporting. Our team collaborates closely with product teams to capture critical data needed for financial use cases.

 

About the Job 

  • You will be part of a team with a large amount of ownership and autonomy.
  • Large scope for company-level impact working on financial data.
  • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
  • You will ship high quality, scalable and robust solutions with a sense of urgency.
  • You will have the freedom to suggest and drive organization-wide initiatives.

 

About You

Minimum Qualifications

  • 6+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
  • Expert in SQL, with working knowledge of Python.
  • Experience building high-quality ETL/ELT pipelines.
  • Experience with data immutability, auditability, slowly changing dimensions, or similar concepts.
  • Experience building data pipelines for accounting/billing purposes.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Adept at fluently communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
  • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high quality and pragmatic solutions.
  • Experience working with a large codebase on a cross functional team.
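
One of the minimum qualifications above mentions slowly changing dimensions. As a hedged illustration of the Type 2 variant (close the old row, open a new one), here is a minimal Python sketch; the function name, record layout, and sentinel date are illustrative assumptions, not Instacart's actual implementation:

```python
from datetime import date

OPEN_END = date(9999, 12, 31)  # sentinel meaning "row is still current"

def scd2_apply(history, incoming, as_of):
    """Apply a batch of records to a Type 2 slowly changing dimension.

    Each history row is a dict with 'key', 'attrs', 'valid_from', 'valid_to';
    rows whose valid_to equals OPEN_END are the current versions. A changed
    key gets its current row closed out and a new row opened; unchanged
    records are left untouched, preserving the immutability of past versions.
    """
    current = {r["key"]: r for r in history if r["valid_to"] == OPEN_END}
    for rec in incoming:
        cur = current.get(rec["key"])
        if cur is not None and cur["attrs"] == rec["attrs"]:
            continue  # no change: keep the existing current row
        if cur is not None:
            cur["valid_to"] = as_of  # close the superseded version
        history.append({"key": rec["key"], "attrs": rec["attrs"],
                        "valid_from": as_of, "valid_to": OPEN_END})
    return history
```

Because old rows are only ever closed, never rewritten, the table stays auditable: filtering on a past date reconstructs the dimension exactly as it looked then.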

 

Preferred Qualifications

  • Bachelor’s degree in Computer Science, Computer Engineering, or Electrical Engineering, or equivalent work experience.
  • Experience with Snowflake, dbt (data build tool), and Airflow.
  • Experience with data quality monitoring/observability, either using custom frameworks or tools such as Great Expectations or Monte Carlo.

 

#LI-Remote

See more jobs at Instacart

Apply for this job

+30d

Data Analytics Engineer

seedtag · Spain (Remote)
tableau, sql, Design, azure, AWS

seedtag is hiring a Remote Data Analytics Engineer

We are offering a Data Analytics Engineer position to help build the global contextual advertising leader.

WHO WE ARE

Seedtag is the leading contextual advertising platform. Our proprietary, machine-learning-based technology provides human-like understanding of content on the web, the highest level of brand safety in the industry, and unmatched cookieless targeting capabilities.

We engage with the market on both demand and supply side to create, activate and launch high-quality advertising campaigns at scale. We are committed to creating a more beautiful, respectful and engaging way to do advertising.

KEY FIGURES

2014 · Founded by two ex-Googlers

2021 · Fundraising round of 40M€ · 10+ countries · 230+ Seedtaggers

2022 · Fundraising round of 250M€ · expansion into the U.S. market

2023 · Expansion into 15 countries · 500+ Seedtaggers

YOUR CHALLENGE

  • Design and implement robust data models to support complex ad campaign measurement and analysis
  • Leverage your expertise in data warehouse architectures (e.g., Snowflake, Redshift) to ensure efficient data storage, retrieval, and scalability
  • Utilize dbt (Data Build Tool) to automate data transformation processes and maintain data quality
  • Develop and maintain the semantic layer, acting as a single source of truth for business users to access and understand campaign data
  • Collaborate with data scientists, analysts, and business stakeholders to translate business needs into technical requirements
  • Build and maintain data pipelines to ingest data from various sources (ad servers, DSPs, etc.)
  • Monitor and troubleshoot data quality issues, ensuring the accuracy and consistency of information
  • Create and maintain comprehensive data documentation
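
The monitoring bullet above can be made concrete with a couple of basic checks (completeness and key uniqueness). This is a minimal sketch whose function and signature are illustrative assumptions; real deployments typically lean on tools such as dbt tests or Great Expectations:

```python
def run_quality_checks(rows, required, unique_key):
    """Run basic completeness and uniqueness checks on a batch of records.

    Returns a list of human-readable failure messages; an empty list means
    every check passed. `rows` is a list of dicts, `required` names the
    columns that must be populated, and `unique_key` the column whose
    values must be unique across the batch.
    """
    failures = []
    for col in required:
        missing = sum(1 for r in rows if r.get(col) in (None, ""))
        if missing:
            failures.append(f"{col}: {missing}/{len(rows)} rows missing a value")
    keys = [r.get(unique_key) for r in rows]
    if len(keys) != len(set(keys)):
        failures.append(f"{unique_key}: duplicate values found")
    return failures
```

Checks like these are usually wired into the pipeline so that a failing batch blocks downstream models instead of silently corrupting campaign reports.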

YOU WILL SUCCEED IN THIS ROLE IF

  • Minimum 3+ years of experience as an Analytics Engineer or similar data-focused role
  • Strong understanding of data modeling concepts and principles
  • Proven experience designing and implementing data warehouse solutions
  • Proficiency in SQL and experience working with relational databases
  • In-depth knowledge of dbt and its capabilities in data transformation
  • Familiarity with the concept of a semantic layer and its importance in data analysis
  • Excellent communication and collaboration skills
  • Ability to work independently and manage multiple projects simultaneously

It would be a big plus if:

  • Experience working in the ad tech industry
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure)
  • Experience with data visualization tools (e.g., Tableau, Power BI, Looker)

SEEDTAG PERKS

Key moment to join Seedtag in terms of growth and opportunities

One Seedtag: Work for a month from any of our open offices with travel and stay paid.

Build your home office with a gross budget of up to 1K€ (external screen, chair, table...)

Optional company-paid English, Spanish and/or French courses

Odilo online school, where you can learn as much as you want

We love what we do, but we also love having fun. We have many team activities you can join and enjoy with your colleagues!

BENEFITS OF WORKING AT SEEDTAG

  • Growth: An international, highly demanding work environment in one of the fastest-growing AdTech companies in Europe. We reject "that’s the way it’s always been done". At Seedtag you will find an energetic, fresh, multicultural workplace, with team members from countries across Europe, LATAM, the US, and beyond!
  • Impact: The chance to have a direct impact; here you don't work for the sake of working, we all have an impact on Seedtag in our own way, rowing in the same direction
  • Diversity of methodology and people: Seedtag's DNA is unique and highly appreciated by very different types of Seedtaggers. We embrace diversity and encourage everyone to seek the best version of themselves and to show who they really are, with a totally flexible methodology
  • Flexibility: At Seedtag, we trust you: you can work from home, the beach, or the office in our hybrid mode

Are you ready to join the Seedtag adventure? Then send us your CV!

See more jobs at seedtag

Apply for this job

+30d

Senior Data Engineer

Bloomreach · Remote CEE, Czechia, Slovakia
redis, remote-first, c++, kubernetes, python

Bloomreach is hiring a Remote Senior Data Engineer

Bloomreach is the world’s #1 Commerce Experience Cloud, empowering brands to deliver customer journeys so personalized, they feel like magic. It offers a suite of products that drive true personalization and digital commerce growth, including:

  • Discovery, offering AI-driven search and merchandising
  • Content, offering a headless CMS
  • Engagement, offering a leading CDP and marketing automation solutions

Together, these solutions combine the power of unified customer and product data with the speed and scale of AI optimization, enabling revenue-driving digital commerce experiences that convert on any channel and every journey. Bloomreach serves over 850 global brands including Albertsons, Bosch, Puma, FC Bayern München, and Marks & Spencer. Bloomreach recently raised $175 million in a Series F funding round, bringing its total valuation to $2.2 billion. The investment was led by Goldman Sachs Asset Management with participation from Bain Capital Ventures and Sixth Street Growth. For more information, visit Bloomreach.com.

 

We want you to join us as a full-time Senior Data Engineer on our Data Pipeline team. We work remote-first, but we are more than happy to meet you in our nice office in Bratislava or Brno. And if you are curious who your engineering manager will be, check out Vaclav's LinkedIn.

Intrigued? Read on…

Your responsibilities

  • You will develop and maintain a data lakehouse on top of the GCP platform using Apache Iceberg, BigQuery, BigLake tables, Dataplex, and Dataproc (Apache Spark/Flink) with open file formats such as Avro and Parquet
  • You will help maintain the streaming mechanism that moves data from Apache Kafka into the data lakehouse
  • You will optimise the data lakehouse for near-real-time and non-real-time analytical use cases, primarily customer activation and scenario/campaign evaluation
  • You will help with areas like data discovery and managed access to data through the data governance layer and data catalog, using Dataplex, so our engineering teams can leverage this unified data lakehouse
  • You will feel responsible for data modeling and schema evolution
  • You will help us adopt concepts from data fabric and data mesh to run data as a product and unlock the potential data can unleash for our clients
  • You will bring expertise from similar previous projects to influence how we adopt and evolve the concepts mentioned above, as well as topics like zero-copy integration and reverse ETL, to ease integration with clients’ platforms
  • You will also help maintain the existing data exports to Google’s BigQuery using Dataflow and Apache Beam
  • You will help us run and support our services in production, handling high-volume traffic on Google Cloud Platform and Kubernetes.
  • You will review the code of your peers and they'll review yours. We have high code-quality standards and the four-eyes principle is a must!
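
The schema evolution responsibility above can be illustrated with Avro-style record projection: readers apply defaults for fields added later (backward compatibility) and drop fields they do not know (forward compatibility). A toy sketch with a hypothetical helper, not Bloomreach's actual code:

```python
def read_with_schema(record, schema):
    """Project a raw record onto a reader schema, Avro-style.

    `schema` maps field name -> default value. Fields absent from the
    record take their default (so old data still reads under a newer
    schema), while fields unknown to the schema are dropped (so new
    data reads under an older schema).
    """
    return {field: record.get(field, default) for field, default in schema.items()}
```

Real Avro schema resolution is richer (type promotion, unions, aliases), but the core contract, defaults on read plus tolerance for unknown fields, is what keeps producers and consumers deployable independently.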

Your qualifications

  • You have production experience building and operating a data lake, data warehouse, or data lakehouse
  • You have a taste for big-data streaming, storage, and processing using open source technologies
  • You can demonstrate your understanding of what it means to treat data as a product
  • You know what data meshes and data fabrics are, and what is critical for them to bring value
  • You are able to learn and adapt. It'll be handy while exploring new tech, navigating our not-so-small code base, or when iterating on our team processes.
  • You know data structures, and you know Python and (optionally) Go.

Our tech stack

  • Google Cloud Platform, Dataflow, Apache Beam, BigQuery, BigLake tables
  • Open formats: Iceberg, Avro, Parquet
  • Dataproc, Spark, Flink, Presto
  • Python, Go
  • Apache Kafka, Kubernetes, GitLab
  • Bigtable, Mongo, Redis
  • … and much more

Compensation

  • Salary range starting from 3500 EUR gross per month, going up depending on your experience and skills
  • There's a bonus based on company performance and your salary.
  • You will be entitled to restricted stock options that will truly make you a part of Bloomreach.
  • You can spend 1500 USD per year on the education of your choice (books, conferences, courses, ...).
  • You can count on free access to Udemy courses.
  • We have 4 company-wide disconnect days throughout the year, during which you will be encouraged not to work and to spend a day with your friends and family "disconnected".
  • You will have an extra 5 days of paid vacation. Extra days off for extra work-life balance.
  • Food allowance!
  • Sweet referral bonus of up to 3000 USD, based on the position.

Your success story.

  • During the first 30 days, you will get to know the team, the company, and the most important processes. You'll work on your first tasks. We will help you get familiar with our codebase and our product.
  • During the first 90 days, you will participate in your first, more complex projects. You will help the team find solutions to various problems, break the solution down into smaller tasks, and participate in implementation. You will learn how we identify problems, how we prioritize our efforts, and how we deliver value to our customers.
  • During the first 180 days, you'll become an integral part of the team. You will achieve the first goals we set together to help you grow and explore new and interesting things. You will help us deliver multi-milestone projects bringing great value to our customers. You will help us mitigate your first incidents and eventually even join the on-call rotation. You will get a sense of where the team is heading and you'll help us shape our future.
  • Finally, you'll find out that our values are truly lived by us. We are dreamers and builders. Join us!

 

More things you'll like about Bloomreach:

Culture:

  • A great deal of freedom and trust. At Bloomreach we don’t clock in and out, and we have neither corporate rules nor long approval processes. This freedom goes hand in hand with responsibility. We are interested in results from day one. 

  • We have defined our 5 values and the 10 underlying key behaviors that we strongly believe in. We can only succeed if everyone lives these behaviors day to day. We've embedded them in our processes like recruitment, onboarding, feedback, personal development, performance review and internal communication. 

  • We believe in flexible working hours to accommodate your working style.

  • We work remote-first with several Bloomreach Hubs available across three continents.

  • We organize company events to experience the global spirit of the company and get excited about what's ahead.

  • We encourage and support our employees to engage in volunteering activities - every Bloomreacher can take 5 paid days off to volunteer*.
  • The Bloomreach Glassdoor page elaborates on our stellar 4.6/5 rating. The Bloomreach Comparably page culture score is even higher, at 4.9/5.

Personal Development:

  • We have a People Development Program -- participating in personal development workshops on various topics run by experts from inside the company. We are continuously developing & updating competency maps for select functions.

  • Our resident communication coach Ivo Večeřa is available to help navigate work-related communications & decision-making challenges.*
  • Our managers are strongly encouraged to participate in the Leader Development Program to develop in the areas we consider essential for any leader. The program includes regular comprehensive feedback, consultations with a coach and follow-up check-ins.

  • Bloomreachers utilize the $1,500 professional education budget on an annual basis to purchase education products (books, courses, certifications, etc.)*

Well-being:

  • The Employee Assistance Program -- with counselors -- is available for non-work-related challenges.*

  • Subscription to Calm - sleep and meditation app.*

  • We organize ‘DisConnect’ days where Bloomreachers globally enjoy one additional day off each quarter, allowing us to unwind together and focus on activities away from the screen with our loved ones.

  • We facilitate sports, yoga, and meditation opportunities for each other.

  • Extended parental leave up to 26 calendar weeks for Primary Caregivers.*

Compensation:

  • Restricted Stock Units or Stock Options are granted depending on a team member’s role, seniority, and location.*

  • Everyone gets to participate in the company's success through the company performance bonus.*

  • We offer an employee referral bonus of up to $3,000 paid out immediately after the new hire starts.

  • We celebrate work anniversaries -- Bloomversaries!*

(*Subject to employment type. Interns are exempt from marked benefits, usually for the first 6 months.)

If this position doesn't suit you, but you know someone who might be a great fit, share it - we will be very grateful!


Any unsolicited resumes/candidate profiles submitted through our website or to personal email accounts of employees of Bloomreach are considered property of Bloomreach and are not subject to payment of agency fees.

 #LI-Remote

See more jobs at Bloomreach

Apply for this job

+30d

Senior Data Engineer

scala, nosql, sql, Design, azure, python

K2 Integrity is hiring a Remote Senior Data Engineer

Senior Data Engineer - K2 Integrity - Career Page

See more jobs at K2 Integrity

Apply for this job

+30d

Lead Data Integration Engineer

O'Reilly Media · Remote, United States
4 years of experience, agile, postgres, sql, RabbitMQ, Design, azure, docker, kubernetes, jenkins, python, AWS

O'Reilly Media is hiring a Remote Lead Data Integration Engineer

Description

About O’Reilly Media
              
O’Reilly’s mission is to change the world by sharing the knowledge of innovators. For over 45 years, we’ve inspired companies and individuals to do new things—and do things better—by providing them with the skills and understanding necessary for success.
                  
At the heart of our business is a unique network of experts and innovators who share their knowledge through us. O’Reilly Learning offers exclusive live training, interactive learning, a certification experience, books, videos, and more, making it easier for our customers to develop the expertise they need to get ahead. And our books have been heralded for decades as the definitive place to learn about the technologies that are shaping the future. Everything we do is to help professionals from a variety of fields learn best practices and discover emerging trends that will shape the future of the tech industry.          
 
Our customers are hungry to build the innovations that propel the world forward. And we help you do just that.
              
Learn more: https://www.oreilly.com/about/      
                    
Diversity        
   
At O’Reilly, we believe that true innovation depends on hearing from, and listening to, people with a variety of perspectives. We want our whole organization to recognize, include, and encourage people of all races, ethnicities, genders, ages, abilities, religions, sexual orientations, and professional roles.       
                    
About the Team         
      
Our data platform team is dedicated to establishing a robust data infrastructure, facilitating easy access to quality, reliable, and timely data for reporting, analytics, and actionable insights. We focus on designing and building a sustainable and scalable data architecture, treating data as a core corporate asset. Our efforts also include process improvement, governance enhancement, and addressing application, functional, and reporting needs. We value teammates who are helpful, respectful, communicate openly, and prioritize the best interests of our users. Operating across various cities and time zones in the US, our team fosters collaboration to deliver work that brings pride and fulfillment.        
      
About the Job          
      
We are seeking a skilled and thoughtful Lead Data Integration Engineer to contribute to the design and development of a modern data platform. The ideal candidate will possess a deep understanding of modern data platform concepts and will develop and support data integration strategies that align with the organization's goals. The candidate will work hand in hand with the data architect and lead team members. Responsibilities include overseeing implementation of a data framework covering data integration services such as profiling, ingestion, transformation, quality, and data operations management.
                   
The Lead Data Integration Engineer will be comfortable with building software that interacts with a diverse range of data. Additionally, the Lead Data Integration Engineer will create tools for delivering analytics data within O’Reilly, aiding decision-making, and enhancing product features. These tools encompass RESTful web services, custom analytics dashboards, and data visualization.                            

Our ETL platform primarily uses BigQuery, Pub/Sub, Talend, Python, and PostgreSQL. We develop and support RESTful web applications in Django, and use Redshift, Hadoop, and Spark for higher-volume data ETL and analysis. Containerization is integral to our approach, employing Docker, Jenkins, and Kubernetes for building, deploying, and managing a diverse range of services. As part of our ongoing initiatives, we are migrating our data platform and services to the GCP cloud environment. The candidate will oversee legacy and new data platform initiatives.

                 
Salary Range: $155,000-$170,000          
 
What You'll Do   
               
The Lead Data Integration Engineer will: 
  • Develop and implement data integration strategies aligned with the organization’s goals and objectives
  • Identify opportunities to streamline data workflows and enhance data quality
  • Oversee the integration of various data sources and ensure seamless data flow across different systems within the organization
  • Collaborate with the architect, enforce ETL best practices, and oversee code reviews for the team
  • Collaborate with cross-functional teams, including data engineers, data analysts, and business stakeholders, to understand data requirements and deliver integrated solutions that meet business needs
  • Monitor and optimize data integration processes for performance, scalability, and efficiency
  • Lead a team of data integration and data support professionals; provide guidance, set priorities, and mentor and support team members to ensure successful project delivery
  • Oversee and maintain documentation for data integration processes, including data mappings, transformations, and data lineage
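
Data lineage, as in the last bullet above, is naturally a dependency graph: each dataset lists its upstream sources, and a topological sort yields a valid build order. A minimal sketch using Python's standard-library `graphlib` (the table names and helper are hypothetical examples, not O'Reilly's pipeline):

```python
from graphlib import TopologicalSorter

def build_order(lineage):
    """Return a build order in which every table comes after its upstreams.

    `lineage` maps each table to the set of tables it reads from; the
    standard-library topological sorter raises CycleError if the recorded
    lineage is circular, which doubles as a sanity check on the docs.
    """
    return list(TopologicalSorter(lineage).static_order())

# Hypothetical lineage: raw orders feed a cleaned table, which feeds a report.
lineage = {
    "revenue_report": {"orders_clean"},
    "orders_clean": {"orders_raw"},
}
```

Keeping lineage in a machine-readable form like this (rather than prose-only documentation) lets the same data drive both scheduling and impact analysis.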
What You'll Have
                
Required:      
  • Bachelor’s degree in Computer Science or related field              
  • In lieu of degree, equivalent education and/or experience may be considered
  • 4 years of experience leading teams in data warehousing and/or data engineering
  • Proven experience in data integration, ETL development and data warehousing             
  • Strong technical skills in GCP, Talend, BigQuery             
  • Deep understanding of SQL, Shell scripting, and Python             
  • Experience with Agile software development lifecycle              
  • Experience with Django, Pub/Sub, Hadoop, Spark and Kubernetes             
  • Knowledge of Postgres, Redshift, RabbitMQ, Jenkins, and Docker
  • Knowledge of BI tools such as Qlik Sense or Looker
  • Knowledge of data governance principles and regulatory requirements             
  • A high level of comfort with DevOps processes
  • Excellent leadership and communication skills
  • Ability to work effectively in a fast-paced, dynamic environment     
Preferred:         
  • Experience with BI tools such as Qlik Sense or Looker
  • Relevant certifications in data integration (ex. Talend Data Integration) and cloud technologies (ex. GCP, AWS, Azure) are a plus

See more jobs at O'Reilly Media

Apply for this job

+30d

Sr Data Engineer

Databricks · Remote - India
azure, AWS

Databricks is hiring a Remote Sr Data Engineer


See more jobs at Databricks

Apply for this job

+30d

Senior Data Engineer

Renaissance · Remote
agile, postgres, sql, Design, docker, mysql, AWS, backend

Renaissance is hiring a Remote Senior Data Engineer

Job Description

Nearpod is a classroom engagement tool at the center of solving one of the world’s largest challenges. Our platform enables teachers to keep learning effective and fun. Teachers are empowered to engage and assess students with in-lesson activities, whether it be virtual or in the classroom. We are proud of the enormous impact we are having on the world of education, while experiencing the business success that goes with it.

Position Overview: We are seeking an experienced data engineer to join our core platform team. You can expect to introduce new coding patterns and features, and to work to reduce the complexity of data ownership for our many teams. As an integral part of our growing team, you will have an impact on technology choices, architecture, and the capabilities of our platform. You will influence our development philosophy and practices while continuing to scale Nearpod. You’ll work closely with product managers to develop innovative solutions that enable great classroom learning experiences.

At Nearpod, we value teamwork and pragmatism. You’ll find success here if:

  • You enjoy writing high quality code and know your job is not just about shipping. You are excited to contribute your ideas to the broader team in order to build great products and great teams.
  • You know some problems require introducing new technology.
  • You also understand that the foundation of any great product is proven and well-understood solutions.

Our Ideal Candidate will:

  • Craft fault-tolerant data pipelines and distributed systems to support millions of students.
  • Have experience developing backend services that leverage relational databases (MySQL), non-relational datastores, message queuing and/or cloud infrastructure.
  • Partner with your peer engineers and product owners on getting new features and products to market.
  • Evolve data ownership and fluency across our teams, allowing us to bring new insights to customers easily and cost-effectively.
  • Develop and improve internal services, scripts, and tools that will be utilized by our teams.
  • Ensure that timely, accurate data and metrics are delivered consistently.
  • Take pride and ownership in producing high quality software, including appropriate test coverage and ability to troubleshoot production issues.
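
A common ingredient of the fault-tolerant pipelines described above is retrying transient failures with exponential backoff. A minimal sketch; the helper and its defaults are illustrative assumptions, not Nearpod's code:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying transient failures with exponential backoff.

    Sleeps base_delay, then 2x, 4x, ... between attempts; the final
    failure is re-raised so the caller (or scheduler) can handle it.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Pairing retries with idempotent writes (e.g. upserts keyed on a natural id) keeps re-delivered batches from double-counting when a step is replayed.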

Qualifications

  • Advanced level SQL coding abilities
  • Significant professional experience in ETL / ELT using cloud-based databases at scale, such as MySQL, Postgres, BigQuery, Redshift, and/or Snowflake
  • Experience designing, coding, and supporting enterprise-scale ETL / ELT pipelines that balance efficiency and intuitive usage.
  • Experience architecting data warehouses, data lakes, and data meshes that are organized, performant, and easy to use.
  • Effective communicator who collaborates well with distributed engineering, product, and design teams.
  • Experience mentoring fellow team members and demonstrating how to stay proficient with new technologies, methodologies, and frameworks.

Bonus Points For:

  • Understanding K-12 education data
  • Experience with the tools we use, like AWS (including Glue and Lambda), Redshift, Data Build Tool (DBT), Snowflake, and Docker - plus experience with tools we don’t use, but should, and the wisdom to know when to recommend them
  • Enjoy working in a 100% remote environment - we have engineers across the US, as well as remote teams in Latin America.

About You:

  • Architecture: You are adept at working with your team to craft a scalable and resilient design. From technology selection to data model, to service design, you can deliver a solution to problems while acknowledging constraints.
  • Software Delivery: We partner closely with our Product and Design teams, so you will have a role in the entire product lifecycle from ideation through deployment. We iterate and learn through agile, lean and continuous delivery practices.
  • Technical Leadership: You are the technical lead for projects. You are comfortable guiding and collaborating with the team to deliver successful shipments together.

See more jobs at Renaissance

Apply for this job

+30d

Sr. Data Platform Engineer

iRhythm · Remote US
golang, 8 years of experience, scala, sql, Design, java, c++, python, AWS

iRhythm is hiring a Remote Sr. Data Platform Engineer

Boldly innovating to create trusted solutions that detect, predict, and prevent disease.

Discover your power to innovate while making a difference in patients' lives. iRhythm is advancing cardiac care…Join Us Now! 

At iRhythm, we are dedicated, self-motivated, and driven to do the right thing for our patients, clinicians, and coworkers. Our leadership is focused and committed to iRhythm’s employees and the mission of the company. We are better together, embrace change and help one another.  We are Thinking Bigger and Moving Faster.


 

About This Role

We are seeking a highly skilled and experienced Sr. Data Platform Engineer (Snowflake Specific Data Engineer) to join our team. In this role, you will be responsible for designing, developing, and maintaining our Snowflake data warehouse. You will work closely with our data analysts, architects, and developers to ensure that our Snowflake data warehouse is optimized for performance, scalability, and reliability.

Responsibilities:

  • Design and develop data pipelines and ETL processes for our Snowflake data warehouse
  • Develop CI/CD pipelines and build custom data extractors/processors using Golang.
  • Build and maintain efficient and scalable data models in Snowflake
  • Collaborate with data analysts, architects, and developers to identify data requirements and implement data solutions
  • Develop and maintain documentation for data pipelines, data models, and data architecture
  • Optimize and tune Snowflake queries for performance and efficiency
  • Implement Snowflake security policies and access controls
  • Troubleshoot issues related to data ingestion, data processing, and data storage in Snowflake

About You

  • Bachelor's or Master's degree in Computer Science or a related field
  • Strong Golang programming experience.
  • Ability to build and deploy CI/CD pipelines; CodeFresh experience preferred.
  • At least 8 years of experience as a data engineer, with a strong focus on Snowflake
  • In-depth knowledge of Snowflake architecture and best practices
  • Strong SQL skills and experience with SQL optimization and tuning
  • Proficiency in at least one other programming language (Python, Java, or Scala)
  • Experience with ETL tools and data integration technologies
  • Familiarity with data modeling concepts and techniques
  • Strong problem-solving skills and attention to detail
  • Strong communication skills and ability to work collaboratively in a team environment
  • Experience with AWS services, such as S3, EC2, Lambda, and Glue
  • Experience with Golang programming language is a plus

What's In It For You

This is a regular full-time position with a competitive compensation package and excellent benefits, including medical, dental, and vision insurance (all of which start on your first day), health savings account employer contributions (when enrolled in a high-deductible medical plan), cafeteria plan pre-tax benefits (FSA, dependent care FSA, commuter reimbursement accounts), travel reimbursement for medical care, and noncontributory basic life insurance & short/long-term disability. Additionally, we offer:

  • emotional health support for you and your loved ones
  • legal / financial / identity theft / pet and child referral assistance
  • paid parental leave, paid holidays, travel assistance for personal trips and PTO!

iRhythm also provides additional benefits including 401(k) (with company match), an Employee Stock Purchase Plan, pet insurance discount, unlimited LinkedIn Learning classes and so much more! 

FLSA Status: Exempt

#LI-LM-2

#LI-Remote


Actual compensation may vary depending on job-related factors including knowledge, skills, experience, and work location.


 

Estimated Pay Range
$133,500–$194,400 USD

As a part of our core values, we ensure a diverse and inclusive workforce. We welcome and celebrate people of all backgrounds, experiences, skills, and perspectives. iRhythm Technologies, Inc. is an Equal Opportunity Employer. We will consider for employment all qualified applicants with arrest and conviction records in accordance with all applicable laws.

iRhythm provides reasonable accommodations for qualified individuals with disabilities in job application procedures, including those who may have any difficulty using our online system. If you need such an accommodation, you may contact us at taops@irhythmtech.com

About iRhythm Technologies
iRhythm is a leading digital healthcare company that creates trusted solutions that detect, predict, and prevent disease. Combining wearable biosensors and cloud-based data analytics with powerful proprietary algorithms, iRhythm distills data from millions of heartbeats into clinically actionable information. Through a relentless focus on patient care, iRhythm’s vision is to deliver better data, better insights, and better health for all.

Make iRhythm your path forward. Zio, the heart monitor that changed the game.

See more jobs at iRhythm

Apply for this job

+30d

Data Engineer I

sql, RabbitMQ, Design, mobile, azure, scrum, qa, git, java, c++, .net, angular, AWS, frontend

Signify Health is hiring a Remote Data Engineer I

How will this role have an impact?

A Software Engineer - Data develops systems to manage data flow throughout Signify Health’s infrastructure. This involves all elements of data engineering, such as ingestion, transformation, and distribution of data.

What will you do?

  • Deliver clean and functional code in accordance with business requirements.
  • Consume data from any source, such as flat files, streaming systems, or RESTful APIs  
  • Develop a high-level understanding of data ecosystems and their impact on adjacent systems.
  • Interface with Electronic Health Records
  • Engineer scalable, reliable, and performant systems to manage data
  • Collaborate closely with other Engineers, QA, Scrum master, Product Manager in your team as well as across the organization
  • Build quality systems while expanding offerings to dependent teams
  • Implement observability in the form of metrics, logging, and monitoring
  • Comfortable in multiple roles, from design and development to code deployment, monitoring, and investigation in production systems.
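A minimal illustration of the flat-file ingestion bullets above, assuming a hypothetical extract with `Member ID` and `State` columns; a real pipeline would add schema validation and error handling:

```python
import csv
import io

def normalize_rows(csv_text):
    """Parse a flat-file extract and normalize it: rename fields to
    snake_case, strip stray whitespace, and coerce ids to integers."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = []
    for raw in reader:
        rows.append({
            "member_id": int(raw["Member ID"].strip()),
            "state": raw["State"].strip().upper(),
        })
    return rows

# A messy two-row extract: padded ids, inconsistent casing.
sample = "Member ID,State\n 101 ,tx\n102, ny \n"
rows = normalize_rows(sample)
```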

Requirements

  • Bachelor's in Computer Science or equivalent
  • Proven ability to complete projects in a timely manner while clearly measuring progress
  • Strong software engineering fundamentals (data structures, algorithms, async programming patterns, object-oriented design, parallel programming)
  • Strong understanding and demonstrated experience with at least one popular programming language (.NET or Java) and SQL constructs.
  • Experience writing and maintaining frontend client applications, Angular preferred
  • Experience with revision control (Git)
  • High level understanding of cloud-based systems (Azure / AWS / GCP).
  • High level understanding of big data design (data lake, data mesh, data warehouse) and data normalization patterns
  • High level understanding of Queuing technologies (Kafka / SNS / RabbitMQ etc)
  • High level understanding of Metrics, Logging, Monitoring and Alerting tools
  • Strong communication skills

The base salary hiring range for this position is $56,300 to $81,100. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  Eligible employees may enroll in a full range of medical, dental, and vision benefits, 401(k) retirement savings plan, and an Employee Stock Purchase Plan.  We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.  

About Us:
Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals’ clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we’re able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.
Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify’s intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.
To learn more about how we’re driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com.

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.
We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job

+30d

Data Engineer II

Signify HealthDallas, Texas / Remote
terraform, sql, RabbitMQ, Design, mobile, azure, scrum, qa, git, java, c++, .net, angular, AWS, frontend

Signify Health is hiring a Remote Data Engineer II

How will this role have an impact?

A Software Engineer - Data develops systems to manage data flow throughout Signify Health’s infrastructure. This involves all elements of data engineering, such as ingestion, transformation, and distribution of data.

What will you do?

  • Communicate with business leaders to help translate requirements into functional specifications
  • Develop broad understanding of business logic and functionality of current systems
  • Analyze and manipulate data by writing and running SQL queries
  • Analyze logs to identify and prevent potential issues from occurring
  • Deliver clean and functional code in accordance with business requirements
  • Consume data from any source, such as flat files, streaming systems, or RESTful APIs
  • Interface with Electronic Health Records
  • Engineer scalable, reliable, and performant systems to manage data
  • Collaborate closely with other Engineers, QA, Scrum master, Product Manager in your team as well as across the organization
  • Build quality systems while expanding offerings to dependent teams
  • Comfortable in multiple roles, from design and development to code deployment, monitoring, and investigation in production systems.

Requirements

  • Bachelor's in Computer Science or equivalent
  • Proven ability to complete projects in a timely manner while clearly measuring progress
  • Strong software engineering fundamentals (data structures, algorithms, async programming patterns, object-oriented design, parallel programming) 
  • Strong understanding and demonstrated experience with at least one popular programming language (.NET or Java) and SQL constructs.
  • Experience writing and maintaining frontend client applications, Angular preferred
  • Strong experience with revision control (Git)
  • Experience with cloud-based systems (Azure / AWS / GCP).
  • High level understanding of big data design (data lake, data mesh, data warehouse) and data normalization patterns
  • Demonstrated experience with Queuing technologies (Kafka / SNS / RabbitMQ, etc.)
  • Demonstrated experience with Metrics, Logging, Monitoring and Alerting tools
  • Strong communication skills
  • Strong experience with use of RESTful APIs
  • High level understanding of HL7 V2.x / FHIR based interface messages.
  • High level understanding of system deployment tasks and technologies. (CI/CD Pipeline, K8s, Terraform)
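The HL7 V2.x bullet in the list above refers to pipe-delimited interface messages exchanged with EHR systems. A toy parser for illustration only; real work would use a library such as python-hl7, which also handles components, repetitions, and escape sequences:

```python
def parse_hl7_segments(message):
    """Split a pipe-delimited HL7 V2 message into a dict of
    segment name -> list of fields. Segments are separated by
    carriage returns, fields by '|'. (Note HL7's own numbering
    treats the MSH field separator as MSH-1, which this sketch
    does not model.)"""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields[1:]
    return segments

# A minimal, made-up ADT^A01 message with MSH and PID segments.
msg = ("MSH|^~\\&|LAB|HOSP|EHR|CLINIC|202401011200||ADT^A01|12345|P|2.5"
       "\rPID|1||98765||DOE^JANE")
parsed = parse_hl7_segments(msg)
```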

The base salary hiring range for this position is $72,100 to $125,600. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  Eligible employees may enroll in a full range of medical, dental, and vision benefits, 401(k) retirement savings plan, and an Employee Stock Purchase Plan.  We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.  

About Us:
Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals’ clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we’re able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.
Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify’s intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.
To learn more about how we’re driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com.

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.
We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job

+30d

Data Engineer III

Signify HealthDallas, TX / Remote
terraform, sql, RabbitMQ, Design, mobile, azure, scrum, qa, git, java, c++, .net, angular, AWS, frontend

Signify Health is hiring a Remote Data Engineer III

How will this role have an impact?

A Software Engineer - Data develops systems to manage data flow throughout Signify Health’s infrastructure. This involves all elements of data engineering, such as ingestion, transformation, and distribution of data.

What will you do?

  • Communicate with business leaders to help translate requirements into functional specifications
  • Develop broad understanding of business logic and functionality of current systems
  • Analyze and manipulate data by writing and running SQL queries
  • Analyze logs to identify and prevent potential issues from occurring
  • Deliver clean and functional code in accordance with business requirements
  • Consume data from any source, such as flat files, streaming systems, or RESTful APIs
  • Interface with Electronic Health Records
  • Engineer scalable, reliable, and performant systems to manage data
  • Collaborate closely with other Engineers, QA, Scrum master, Product Manager in your team as well as across the organization
  • Build quality systems while expanding offerings to dependent teams
  • Comfortable in multiple roles, from design and development to code deployment, monitoring, and investigation in production systems.

Requirements

  • Bachelor's in Computer Science or equivalent
  • Proven ability to complete projects in a timely manner while clearly measuring progress
  • Strong software engineering fundamentals (data structures, algorithms, async programming patterns, object-oriented design, parallel programming) 
  • Strong understanding and demonstrated experience with at least one popular programming language (.NET or Java) and SQL constructs.
  • Experience writing and maintaining frontend client applications, Angular preferred
  • Strong experience with revision control (Git)
  • Experience with cloud-based systems (Azure / AWS / GCP).
  • High level understanding of big data design (data lake, data mesh, data warehouse) and data normalization patterns
  • Demonstrated experience with Queuing technologies (Kafka / SNS / RabbitMQ, etc.)
  • Demonstrated experience with Metrics, Logging, Monitoring and Alerting tools
  • Strong communication skills
  • Strong experience with use of RESTful APIs
  • High level understanding of HL7 V2.x / FHIR based interface messages.
  • High level understanding of system deployment tasks and technologies. (CI/CD Pipeline, K8s, Terraform)
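The queuing-technology requirement above usually implies consumer patterns such as bounded retries with a dead-letter queue, so one bad message cannot block the stream. A broker-agnostic sketch using a plain Python list in place of a Kafka/RabbitMQ topic:

```python
def consume_with_retries(messages, handler, max_attempts=3):
    """Process queued messages with bounded retries; a message that
    still fails after max_attempts lands in a dead-letter list,
    mirroring the retry-topic/DLQ pattern real consumers use."""
    processed, dead_letter = [], []
    for msg in messages:
        for attempt in range(1, max_attempts + 1):
            try:
                processed.append(handler(msg))
                break
            except Exception:
                if attempt == max_attempts:
                    dead_letter.append(msg)
    return processed, dead_letter

# Handler that rejects non-numeric payloads.
ok, dlq = consume_with_retries(["1", "x", "3"], int)
```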

The base salary hiring range for this position is $92,300 to $160,800. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  Eligible employees may enroll in a full range of medical, dental, and vision benefits, 401(k) retirement savings plan, and an Employee Stock Purchase Plan.  We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.  

About Us:
Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals’ clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we’re able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.
Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify’s intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.
To learn more about how we’re driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com.

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.
We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job

+30d

Senior Data Integration Engineer (Remote)

Trace3Remote
agile, nosql, sql, Design, azure, graphql, api, java, c++, c#, python, backend

Trace3 is hiring a Remote Senior Data Integration Engineer (Remote)

 


Who is Trace3?

Trace3 is a leading Transformative IT Authority, providing unique technology solutions and consulting services to our clients. Equipped with elite engineering and dynamic innovation, we empower IT executives and their organizations to achieve competitive advantage through a process of Integrate, Automate, Innovate.

Our culture at Trace3 embodies the spirit of a startup with the advantage of a scalable business. Employees can grow their career and have fun while doing it!

Trace3 is headquartered in Irvine, California. We employ more than 1,200 people all over the United States. Our major field office locations include Denver, Indianapolis, Grand Rapids, Lexington, Los Angeles, Louisville, Texas, San Francisco.  

Ready to discover the possibilities that live in technology?

 

Come Join Us!

Street-Smart - Thriving in Dynamic Times

We are flexible and resilient in a fast-changing environment. We continuously innovate and drive constructive change while keeping a focus on the “big picture.” We exercise sound business judgment in making high-quality decisions in a timely and cost-effective manner. We are highly creative and can dig deep within ourselves to find positive solutions to different problems.

Juice - The “Stuff” it takes to be a Needle Mover

We get things done and drive results. We lead without a title, empowering others through a can-do attitude. We look forward to the goal, mentally mapping out every checkpoint on the pathway to success, and visualizing what the final destination looks and feels like.

Teamwork - Humble, Hungry and Smart

We are humble individuals who understand how our job impacts the company's mission. We treat others with respect, admit mistakes, give credit where it’s due and demonstrate transparency. We “bring the weather” by exhibiting positive leadership and solution-focused thinking. We hug people in their trials, struggles, and failures – not just their success. We appreciate the individuality of the people around us.

 

About the Role:

We’re looking to add a Senior Data Integration Engineer with a strong background in data engineering and development.  You will work with a team of software and data engineers to build client-facing data-first solutions utilizing data technologies such as SQL Server and MongoDB. You will develop data pipelines to transform/wrangle/integrate the data into different data zones.

To be successful in this role, you will need to hold extensive knowledge of SQL, relational databases, ETL pipelines, and big data fundamentals.  You will also need to possess strong experience in the development and consumption of RESTful APIs.  The ideal candidate will also be a strong independent worker and learner.

What You’ll Do:

  • Develop processes and data models for consuming large quantities of 3rd party vendor data via RESTful APIs.
  • Develop data processing pipelines to analyze, transform, and migrate data between applications and systems.
  • Analyze data from multiple sources and reconcile differences in storage schemas using the ETL process.
  • Develop APIs for external consumption by partners and customers.
  • Develop and support our ETL environment by recommending improvements, monitoring, and deploying quality and validation processes to ensure accuracy and integrity of data.
  • Design, develop, test, deploy, maintain, and improve data integration pipelines.
  • Create technical solutions that solve business problems and are well engineered, operable, and maintainable.
  • Design and implement tools to detect data anomalies (observability). Ensure that data is accurate, complete, and high quality across all platforms.
  • Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
  • Assemble large and complex data sets; develop data models based on specifications using structured data sets.
  • Develop familiarity with emerging and complex automations and technologies that support business processes.
  • Develop scalable and re-usable frameworks for ingestion and transformation of large datasets.
  • Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs while assisting with data-related technical issues.
  • Develop data models and mappings and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Engage in logical and physical design of databases, table creation, script creation, views, procedures, packages, and other database objects.
  • Create documentation for solutions and processes implemented or updated to ensure team members and stakeholders can correctly interpret it.
  • Design and implement processes and/or process improvements to help the development of technology solutions.
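One concrete form of the anomaly-detection (observability) bullet above is a volume check: flag a pipeline run whose row count drifts too far from recent history. A minimal sketch; the three-standard-deviation threshold is a common default, not anything specific to Trace3:

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's pipeline row count if it deviates from the
    historical mean by more than `threshold` standard deviations,
    the kind of volume check data-observability tools run."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Five recent daily row counts from a hypothetical pipeline.
counts = [10_000, 10_200, 9_900, 10_100, 10_050]
```

A sudden drop to 2,000 rows trips the check, while a count near the historical mean passes.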

Qualifications & Interests:

  • 5+ years of relational database development experience; including SQL query generation and tuning, database design, and data concepts.
  • 5+ years of backend and Restful API development experience in NodeJS (experience with GraphQL a plus).
  • 5+ years of development experience with the following languages: Python, Java, C#/.NET.
  • 5+ years of experience with SQL and NoSQL databases; including MS SQL and MongoDB.
  • 5+ years consuming RESTful APIs with data ingestion and storage.
  • 5+ years developing RESTful APIs for use by customers and 3rd parties.
  • 3+ years of professional work experience designing and implementing data pipelines in a cloud environment.
  • 3+ years of experience working within Azure cloud.
  • Experience in integrating and ingesting data from external data sources.
  • Strong diagnostic skills and ability to research, troubleshoot, and logically determine solutions.
  • Ability to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
  • Comfortable managing multiple and changing priorities, and meeting deadlines.
  • Highly organized, detail-oriented, excellent time management skills.
  • Excellent written and verbal communication skills.

The Perks:

  • Comprehensive medical, dental and vision plans for you and your dependents
  • 401(k) Retirement Plan with Employer Match, 529 College Savings Plan, Health Savings Account, Life Insurance, and Long-Term Disability
  • Competitive Compensation
  • Training and development programs
  • Stocked kitchen with snacks and beverages
  • Collaborative and cool culture
  • Work-life balance and generous paid time off

 

***To all recruitment agencies: Trace3 does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Trace3 employees or any other company location. Trace3 is not responsible for any fees related to unsolicited resumes/CVs.

Actual salary will be based on a variety of factors, including location, experience, skill set, performance, licensure and certification, and business needs. The range for this position in other geographic locations may differ. Certain positions may also be eligible for variable incentive compensation, such as bonuses or commissions, that is not included in the base salary.
Estimated Pay Range
$125,600–$163,800 USD

See more jobs at Trace3

Apply for this job