Data Engineer Remote Jobs

108 Results

+30d

Senior Data Engineer - AWS (Remote)

Fannie Mae · Reston, VA, Remote
agile, Bachelor degree, sql, oracle, mongodb, ui, scrum, mysql, python, AWS

Fannie Mae is hiring a Remote Senior Data Engineer - AWS (Remote)

Job Description

As a valued colleague on our team, you will contribute to developing data infrastructure and pipelines to capture, integrate, organize, and centralize data, while testing to ensure the data is readily accessible and usable, including quality assurance.

THE IMPACT YOU WILL MAKE
The Senior Data Engineer role will offer you the flexibility to make each day your own, while working alongside people who care so that you can deliver on the following responsibilities:

  • Identify customer needs and intended use of requested data in the development of database requirements and support the planning and engineering of enterprise databases.
  • Maintain comprehensive knowledge of database technologies, complex coding languages, and computer system skills.
  • Support the integration of data into readily available formats while maintaining existing structures, and govern their use according to business requirements.
  • Analyze new data sources and monitor the performance, scalability, and security of data.
  • Create an initial analysis and deliver the user interface (UI) to the customer to enable further analysis.

 

Qualifications

THE EXPERIENCE YOU BRING TO THE TEAM

Minimum Required Experiences:

  • 2+ years with Big Data Hadoop cluster (HDFS, Yarn, Hive, MapReduce frameworks), Spark, AWS EMR
  • 2+ years of recent experience building and deploying applications in AWS (S3, Hive, Glue, AWS Batch, DynamoDB, Redshift, AWS EMR, CloudWatch, RDS, Lambda, SNS, SQS, etc.)
  • 2+ years of Python, SQL, SparkSQL, PySpark
  • Excellent problem-solving skills and strong verbal & written communication skills
  • Ability to work independently as well as part of an agile team (Scrum / Kanban)
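
As a rough illustration of the Python-and-SQL work these requirements describe (a generic sketch using SQLite from the standard library purely as a stand-in; the table and column names are invented and are not Fannie Mae's actual stack or schema):

```python
import sqlite3

# Hypothetical example: load a few records and aggregate them with SQL
# driven from Python. Schema and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (state TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?)",
    [("VA", 250000.0), ("VA", 310000.0), ("TX", 180000.0)],
)

# The same GROUP BY shape of query would typically run through
# SparkSQL or PySpark on EMR against a much larger dataset.
rows = conn.execute(
    "SELECT state, COUNT(*) AS n, SUM(amount) AS total "
    "FROM loans GROUP BY state ORDER BY state"
).fetchall()
```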


Desired Experiences:

  • Bachelor's degree or equivalent
  • Knowledge of Spark streaming technologies 
  • Experience in working with agile development teams
  • Familiarity with Hadoop / Spark information architecture, Data Modeling, Machine Learning (ML)
  • Knowledge of Environmental, Social, and Corporate Governance (ESG)

Skills

  • Skilled in cloud technologies and cloud computing
  • Programming including coding, debugging, and using relevant programming languages
  • Experience analyzing data to identify trends or relationships that inform conclusions about the data
  • Skilled in creating and managing databases with the use of relevant software such as MySQL, Hadoop, or MongoDB
  • Skilled in discovering patterns in large data sets with the use of relevant software such as Oracle Data Mining or Informatica
  • Experience using software and computer systems' architectural principles to integrate enterprise computer applications such as xMatters, AWS Application Integration, or WebSphere
  • Working respectfully and cooperatively with people of different functional expertise toward a common goal
  • Communication including communicating in writing or verbally, copywriting, planning and distributing communication, etc.

Tools

  • Skilled in AWS Analytics such as Athena, EMR, or Glue
  • Skilled in AWS Database products such as Neptune, RDS, Redshift, or Aurora
  • Skilled in SQL
  • Skilled in AWS Compute such as EC2, Lambda, Beanstalk, or ECS
  • Skilled in Amazon Web Services (AWS) offerings, development, and networking platforms
  • Skilled in AWS Management and Governance suite of products such as CloudTrail, CloudWatch, or Systems Manager
  • Skilled in Python object-oriented programming

See more jobs at Fannie Mae

Apply for this job

+30d

Sr. Data Engineer

agile, terraform, airflow, postgres, sql, Design, api, c++, docker, kubernetes, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
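
As a minimal sketch of the data-quality and testing-framework work described above (the field names and completeness rule are hypothetical, not Hims & Hers' actual framework):

```python
# Partition incoming rows into valid and rejected sets based on a simple
# completeness rule; real frameworks add type, range, and freshness checks.
def validate_rows(rows, required=("user_id", "event_ts")):
    """Return (valid, rejected) partitions based on required-field presence."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(field) is not None for field in required):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

good, bad = validate_rows([
    {"user_id": 1, "event_ts": "2024-01-01T00:00:00Z"},
    {"user_id": None, "event_ts": "2024-01-01T00:01:00Z"},
])
```

In a production pipeline the rejected partition would typically be routed to a quarantine table and surfaced through monitoring rather than dropped.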

You Have:

  • 8+ years of professional experience designing, creating, and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Bachelor's degree in Computer Science, Engineering, or related field, or relevant years of work experience 
  • Demonstrated experience writing clean, efficient, and well-documented Python code, with a willingness to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience with Databricks platform
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with modern data stack tools like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, and Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors. We don’t ever want the pay range to act as a deterrent from you applying!

An estimate of the current salary range for US-based employees is
$140,000 to $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Data Engineer

Maker&Son Ltd · Balcombe, United Kingdom, Remote
golang, tableau, airflow, sql, mongodb, elasticsearch, python, AWS

Maker&Son Ltd is hiring a Remote Data Engineer

Job Description

We are looking for a highly motivated individual to join our team as a Data Engineer.

We are based in Balcombe (40 minutes from London by train, 20 minutes from Brighton) and we will need you to be based in our offices at least 3 days a week.

You will report directly to the Head of Data.

Candidate Overview

As a part of the Technology Team your core responsibility will be to help maintain and scale our infrastructure for analytics as our data volume and needs continue to grow at a rapid pace. This is a high impact role, where you will be driving initiatives affecting teams and decisions across the company and setting standards for all our data stakeholders. You’ll be a great fit if you thrive when given ownership, as you would be the key decision maker in the realm of architecture and implementation.

Responsibilities

  • Understand our data sources, ETL logic, and data schemas and help craft tools for managing the full data lifecycle
  • Play a key role in building the next generation of our data ingestion pipeline and data warehouse
  • Run ad hoc analysis of our data to answer questions and help prototype solutions
  • Support and optimise existing ETL pipelines
  • Support technical and business stakeholders by providing key reports and supporting the BI team to become fully self-service
  • Own problems through to completion both individually and as part of a data team
  • Support digital product teams by performing query analysis and optimisation

 

Qualifications

Key Skills and Requirements

  • 3+ years experience as a data engineer
  • Ability to own data problems and help to shape the solution for business challenges
  • Good communication and collaboration skills; comfortable discussing projects with anyone from end users up to the executive company leadership
  • Fluency with a programming language - we use NodeJS and Python, and are looking to adopt Golang
  • Ability to write and optimise complex SQL statements
  • Familiarity with ETL pipeline tools such as Airflow or AWS Glue
  • Familiarity with data visualisation and reporting tools, like Tableau, Google Data Studio, Looker
  • Experience working in a cloud-based software development environment, preferably with AWS or GCP
  • Familiarity with NoSQL databases such as ElasticSearch, DynamoDB, or MongoDB
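
The ETL pipeline work mentioned above can be sketched in miniature with plain Python generators (an illustrative toy with invented field names, not Maker&Son's codebase; real pipelines would run under an orchestrator such as Airflow):

```python
# Toy extract-transform-load flow: each stage is a generator, so records
# stream through without being held in memory all at once.
def extract(records):
    # In a real pipeline this would read from a source system or API.
    yield from records

def transform(rows):
    # Derive an integer pence amount from a pounds value.
    for row in rows:
        yield {**row, "total_pence": round(row["total_gbp"] * 100)}

def load(rows, sink):
    # In a real pipeline this would write to a warehouse table.
    for row in rows:
        sink.append(row)
    return sink

sink = load(transform(extract([{"order_id": 1, "total_gbp": 19.99}])), [])
```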

See more jobs at Maker&Son Ltd

Apply for this job

+30d

Data Engineer--US Citizens/Green Card

Software Technology Inc · Brentsville, VA, Remote
nosql, sql, azure, api, git

Software Technology Inc is hiring a Remote Data Engineer--US Citizens/Green Card

Job Description

I am a Lead Talent Acquisition Specialist at STI (Software Technology Inc) and currently looking for a Data Engineer.

Below is a detailed job description. Should you be interested, please feel free to reach me via call or email: amrutha.duddula AT stiorg.com / 732-664-8807.

Title:  Data Engineer
Location: Manassas, VA (remote during Covid)
Duration: Long Term Contract

Required Skills:

  • Experience working in Azure Databricks and Apache Spark
  • Proficient programming in Scala, Python, or Java
  • Experience developing and deploying data pipelines for streaming and batch data from multiple sources
  • Experience creating data models and implementing business logic using the tools and languages listed
  • Working knowledge of Kafka, Structured Streaming, the DataFrame API, SQL, and NoSQL databases
  • Comfortable with APIs, Azure Data Lake, Git, notebooks, Spark clusters, Spark jobs, and performance tuning
  • Excellent communication skills
  • Familiarity with Power BI, Delta Lake, Lambda Architecture, Azure Data Factory, and Azure Synapse a plus
  • Telecom domain experience is not required but is very helpful
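
The streaming work listed above is usually micro-batch under the hood (this is how Spark Structured Streaming processes data); here is that core idea in plain Python, as a conceptual sketch only, with no Spark or Kafka involved:

```python
# Group an unbounded-looking event stream into fixed-size micro-batches,
# flushing any final partial batch. Batch size 3 is arbitrary.
def micro_batches(events, batch_size):
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

batches = list(micro_batches(range(7), 3))
```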

Thank you,
Amrutha Duddula
Lead Talent Acquisition Specialist
Software Technology Inc (STI)

Email: amrutha.duddula AT stiorg.com
Phone : 732-664-8807
www.stiorg.com
www.linkedin.com/in/amruthad/

Qualifications

See more jobs at Software Technology Inc

Apply for this job

+30d

Senior Data Science Engineer - Remote

RapidSoft Corp · Reston, VA, Remote
agile, Design, java, python

RapidSoft Corp is hiring a Remote Senior Data Science Engineer - Remote

Job Description

Duties and Responsibilities:

  • Develop data solutions in collaboration with other team members and software engineering teams that meet and anticipate business goals and strategies
  • Work with senior data science engineers in analyzing and understanding all aspects of data, including source, design, insight, technology, and modeling
  • Develop and manage scalable data processing platforms for both exploratory and real-time analytics
  • Oversee and develop algorithms for quick data acquisition, analysis, and evolution of the data model to improve search and recommendation engines
  • Document and demonstrate solutions
  • Design system specifications and provide standards and best practices
  • Support and mentor junior data engineers by providing advice and coaching
  • Make informed decisions quickly and take ownership of services and applications at scale
  • Be a persistent, creative problem solver, constantly striving to improve and iterate on both processes and technical solutions
  • Remain cool and effective in a crisis
  • Understand business needs and know how to create the tools to manage them
  • Take initiative, own the problem, and own the solution
  • Other duties as assigned

Supervisory Responsibilities:

  • None

Minimum Qualifications:

  • Bachelor's Degree in Data Engineering, Computer Science, Information Technology, or a related discipline (or equivalent experience)
  • 8+ years of experience in data engineering development
  • 5+ years of experience working in object-oriented programming languages such as Python or Java
  • Experience working in an Agile environment

Qualifications

See more jobs at RapidSoft Corp

Apply for this job

+30d

Data Engineer (F/H)

ASI · Lyon, France, Remote
oracle

ASI is hiring a Remote Data Engineer (F/H)

Job Description

YOUR RESPONSIBILITIES

  • Analyze and understand client needs
  • Participate in the various project phases:

           - Designing the BI and technical architecture

           - Writing functional and technical specifications

           - Processing data: designing and specifying the data-loading jobs

           - Building data cubes

           - Developing reports

           - Testing and acceptance

           - Deploying to production

Qualifications

  • You are ready to skill up on new solutions and want to broaden your technology spectrum.
  • You are curious, rigorous, analytical, dynamic, and highly adaptable.
  • You have an appetite for and/or knowledge of new data concepts (Data Mining, Data Visualisation, Data Lake, etc.)

You have a higher-education degree and successful experience in a similar role, with the following skills:

           - Working on Data Intelligence projects, independently or within a team

           - Mastery of one or more market solutions

           - Data loading and storage: SSIS, Talend, Oracle Data Integrator, SAP Data Services, etc.

           - Analysis: SSAS, SAS, etc.

           - Reporting: SSRS, SAP BO, QlikView, Qlik Sense, Power BI

With equal skills, this position is open to people with disabilities.

Together, we will develop your skills and enrich your experience!

So join our ASI team!

See more jobs at ASI

Apply for this job

+30d

Senior Data Engineer

Remote · Remote - Southeast Asia
airflow, sql, jenkins, python

Remote is hiring a Remote Senior Data Engineer

About Remote

Remote is solving global remote organizations’ biggest challenge: employing anyone anywhere compliantly. We make it possible for businesses big and small to employ a global team by handling global payroll, benefits, taxes, and compliance. Check out remote.com/how-it-works to learn more or if you’re interested in adding to the mission, scroll down to apply now.

Please take a look at remote.com/handbook to learn more about our culture and what it is like to work here. Not only do we encourage folks from all ethnic groups, genders, sexuality, age and abilities to apply, but we prioritize a sense of belonging. You can check out independent reviews by other candidates on Glassdoor or look up the results of our candidate surveys to see how others feel about working and interviewing here.

All of our positions are fully remote. You do not have to relocate to join us!

What this job can offer you

This is an exciting time to join the growing Data Team at Remote, which today consists of over 15 Data Engineers, Analytics Engineers and Data Analysts spread across 10+ countries. Throughout the team we're focused on driving business value through impactful decision making. We're in a transformative period where we're laying the foundations for scalable company growth across our data platform, which truly serves every part of the Remote business. This team would be a great fit for anyone who loves working collaboratively on challenging data problems, and making an impact with their work. We're using a variety of modern data tooling on the AWS platform, such as Snowflake and dbt, with SQL and python being extensively employed.

This is an exciting time to join Remote and make a personal difference in the global employment space as a Senior Data Engineer, joining our Data team, composed of Data Analysts and Data Engineers. We support decision-making and operational reporting needs by translating data into actionable insights for non-data professionals at Remote. We’re mainly using SQL, Python, Meltano, Airflow, Redshift, Metabase and Retool.

What you bring

  • 5+ years of experience in data engineering; high-growth tech company experience is a plus
  • Strong experience with building data extraction/transformation pipelines (e.g. Meltano, Airbyte) and orchestration platforms (e.g. Airflow)
  • Strong experience in working with SQL, data warehouses (e.g. Redshift) and data transformation workflows (e.g. dbt)
  • Solid experience using CI/CD (e.g. Gitlab, Github, Jenkins)
  • Experience with data visualization tools (e.g. Metabase) is considered a plus
  • A self-starter mentality and the ability to thrive in an unstructured and fast-paced environment
  • You have strong collaboration skills and enjoy mentoring
  • You are a kind, empathetic, and patient person
  • You write and speak fluent English
  • Prior remote-work experience is not required, but it is considered a plus

Key Responsibilities

  • Playing a key role in Data Platform Development & Maintenance:
    • Managing and maintaining the organization's data platform, ensuring its stability, scalability, and performance.
    • Collaboration with cross-functional teams to understand their data requirements and optimize data storage and access, while protecting data integrity and privacy.
    • Development and testing architectures that enable data extraction and transformation to serve business needs.
  • Improving further our Data Pipeline & Monitoring Systems:
    • Designing, developing, and deploying efficient Extract, Load, Transform (ELT) processes to acquire and integrate data from various sources into the data platform.
    • Identifying, evaluating, and implementing tools and technologies to improve ELT pipeline performance and reliability.
    • Ensuring data quality and consistency by implementing data validation and cleansing techniques.
    • Implementing monitoring solutions to track the health and performance of data pipelines and identify and resolve issues proactively.
    • Conducting regular performance tuning and optimization of data pipelines to meet SLAs and scalability requirements.
  • Digging deep into dbt modelling:
    • Designing, developing, and maintaining dbt (Data Build Tool) models for data transformation and analysis.
    • Collaboration with Data Analysts to understand their reporting and analysis needs and translate them into DBT models, making sure they respect internal conventions and best practices.
  • Driving our Culture of Documentation:
    • Creating and maintaining technical documentation, including data dictionaries, process flows, and architectural diagrams.
    • Collaborating with cross-functional teams, including Data Analysts, SREs (Site Reliability Engineers) and Software Engineers, to understand their data requirements and deliver effective data solutions.
    • Sharing knowledge and offering mentorship, providing guidance and advice to peers and colleagues, creating an environment that empowers collective growth

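The pipeline-monitoring responsibility above can be sketched as an SLA-style freshness check (pipeline names and the 24-hour threshold are invented for illustration; in practice this would run against orchestrator metadata, e.g. from Airflow):

```python
from datetime import datetime, timedelta, timezone

# Flag pipelines whose last successful run is older than the allowed age.
def stale_pipelines(last_success, now, max_age=timedelta(hours=24)):
    """Return names of pipelines that have breached the freshness SLA."""
    return sorted(name for name, ts in last_success.items() if now - ts > max_age)

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
stale = stale_pipelines(
    {
        "billing_elt": datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc),  # 30h old
        "payroll_elt": datetime(2024, 1, 2, 9, 0, tzinfo=timezone.utc),  # 3h old
    },
    now,
)
```

A check like this would typically feed an alerting channel so issues are resolved proactively rather than discovered by downstream consumers.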
Practicals

  • You'll report to: Engineering Manager - Data
  • Team: Data 
  • Location: For this position we welcome everyone to apply, but we will prioritise applications from the following locations as we encourage our teams to diversify: Vietnam, Indonesia, Taiwan, and South Korea
  • Start date: As soon as possible

Remote Compensation Philosophy

Remote's Total Rewards philosophy is to ensure fair, unbiased compensation and fair equity pay, along with competitive benefits, in all locations in which we operate. We do not agree to or encourage cheap-labor practices, and therefore we ensure we pay above in-location rates. We hope to inspire other companies to support global talent-hiring and bring local wealth to developing countries.

At first glance our salary bands seem quite wide - here is some context. At Remote we have international operations and a globally distributed workforce. We use geo ranges to consider geographic pay differentials as part of our global compensation strategy to remain competitive in various markets while we hire globally.

The base salary range for this full-time position is $53,500 USD to $131,300 USD. Our salary ranges are determined by role, level and location, and our job titles may span more than one career level. The actual base pay for the successful candidate in this role is dependent upon many factors such as location, transferable or job-related skills, work experience, relevant training, business needs, and market demands. The base salary range may be subject to change.

Application process

  1. Interview with recruiter
  2. Interview with future manager
  3. Async exercise stage 
  4. Interview with team members


Benefits

Our full benefits & perks are explained in our handbook at remote.com/r/benefits. As a global company, each country works differently, but some benefits/perks are for all Remoters:
  • work from anywhere
  • unlimited personal time off (minimum 4 weeks)
  • quarterly company-wide day off for self care
  • flexible working hours (we are async)
  • 16 weeks paid parental leave
  • mental health support services
  • stock options
  • learning budget
  • home office budget & IT equipment
  • budget for local in-person social events or co-working spaces

How you’ll plan your day (and life)

We work async at Remote which means you can plan your schedule around your life (and not around meetings). Read more at remote.com/async.

You will be empowered to take ownership and be proactive. When in doubt you will default to action instead of waiting. Your life-work balance is important and you will be encouraged to put yourself and your family first, and fit work around your needs.

If that sounds like something you want, apply now!

How to apply

  1. Please fill out the form below and upload your CV in PDF format.
  2. We kindly ask you to submit your application and CV in English, as this is the standardised language we use here at Remote.
  3. If you don’t have an up to date CV but you are still interested in talking to us, please feel free to add a copy of your LinkedIn profile instead.

We will ask you to voluntarily tell us your pronouns at interview stage, and you will have the option to answer our anonymous demographic questionnaire when you apply below. As an equal employment opportunity employer it’s important to us that our workforce reflects people of all backgrounds, identities, and experiences and this data will help us to stay accountable. We thank you for providing this data, if you chose to.

See more jobs at Remote

Apply for this job

+30d

Data Engineer

Increasingly · Bengaluru, India, Remote
Design, git, jenkins, python, AWS

Increasingly is hiring a Remote Data Engineer

Job Description

Working experience in data integration and pipeline development

Qualifications

3+ years of relevant experience with AWS Cloud data integration using Databricks, Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in the S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.

Strong real-world experience in Python development, especially PySpark, in the AWS Cloud environment.

Design, develop, test, deploy, maintain, and improve data integration pipelines.

Experience in Python and common Python libraries.

Strong analytical experience with databases: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.

Strong experience with source control systems such as Git and Bitbucket, and with build and continuous integration tools such as Jenkins.

 

See more jobs at Increasingly

Apply for this job

+30d

Data Engineer (m/w/d)

rheindata GmbH · Köln, Germany, Remote
sql, azure, java, linux, python, AWS

rheindata GmbH is hiring a Remote Data Engineer (m/w/d)

Job Description

As a Data Engineer (m/f/d), you will

  • introduce complex solutions based on modern data engineering technologies, together with teams at well-known clients.
  • With an independent, structured way of working, you will develop solutions on customer systems together with business analysts, based on agreed, individual requirements.
  • You will own the entire lifecycle of data provisioning (ETL) and work in an agile development environment.

Qualifications

What you bring:

  • a completed Bachelor's/Master's degree with an IT focus
  • advanced knowledge of ETL development in SQL and Big Data environments
  • strong communication skills,
  • very good analytical skills,
  • very good German (C2),
  • fluent English, spoken and written.

You also have very good knowledge of and experience with several of the following technologies and tools:

  • AWS (Glue, ECS, Athena, EMR, Redshift, Kinesis)
  • Azure (Datafactory, Databricks, Synapse, Analysis Services)
  • Google Cloud Platform (Big Query, Cloud Storage, Cloud SQL)
  • Spark
  • Python und/oder Java
  • SQL
  • Linux

 

 

See more jobs at rheindata GmbH

Apply for this job

+30d

Data Engineer

AmpleInsightInc · Toronto, Canada, Remote
airflow, sql, python

AmpleInsightInc is hiring a Remote Data Engineer

Job Description

We are looking for a data engineer who is passionate about analytics and helping companies build and scale their data. You enjoy working with data and are motivated to produce high-quality data tools and pipelines that empower other data scientists. You are experienced in architecting data ETL workflows and schemas. Critical thinking and problem-solving skills are essential for this role.

Qualifications

  • BS (or higher, e.g., MS, or PhD) in Computer Science, Engineering, Math, or Statistics
  • Hands on experience working with user engagement, social, marketing, and/or finance data
  • Proficient in Python (e.g. Pandas, NumPy, scikit-learn), R, and TensorFlow, among other data science tools and libraries
  • Extensive experience working on relational databases, designing complex data schemas, and writing SQL queries
  • Deep knowledge on performance tuning of ETL Jobs, SQL, and databases
  • Working knowledge of Snowflake
  • Experience working with Airflow is a strong plus
  • DevOps experience is a plus

See more jobs at AmpleInsightInc

Apply for this job

+30d

Senior Data Engineer (Remote)

SalsaMobi · Austin, TX, Remote
nosql, Design, azure

SalsaMobi is hiring a Remote Senior Data Engineer (Remote)

Job Description

We are looking for a Full-Time Senior Data Engineer to:

  • Work as part of a small, supercharged team
  • Collaborate with Product Managers, Architects, and Engineering leaders to define, build and architect new customer-facing features
  • Write clean, reusable, testable and efficient code

As a member of the Data Platform team with our Client, you will help advance the platform to allow the organization to make data-driven decisions and also improve the product experience. To achieve this, the team currently leverages the power of the Google Cloud Platform and existing open source technologies.


The team is also responsible for building tooling around data and promoting best practices around it, with a focus on user data privacy. We are looking for a driven, detail-oriented, and passionate engineer to join our Client's Data Platform team.

Qualifications

  • 5+ years of data engineering experience
  • Tech Stack: Azure, PowerBI, ETL, Data Warehouse
  • Experience working with Snowflake and DynamoDB
  • Extensive programming experience preferably in Python/Java
  • Hands-on experience in data modeling, data pipeline design, and development
  • Good understanding of data processing frameworks and tools (e.g. Beam, Spark, Hive, Kafka)
  • Proficient in relational as well as NoSQL data stores, methods, and approaches  
  • Experience with Google Cloud or similar cloud provider is nice to have
  • Good communication skills in English.

See more jobs at SalsaMobi

Apply for this job

+30d

Data Engineer

JLIConsulting, Vaughan, Canada, Remote
oracle, azure, api, git, AWS

JLIConsulting is hiring a Remote Data Engineer

Job Description

Data Engineer Job Responsibilities:

 

•       Work with stakeholders to understand data sources and the Data, Analytics and Reporting team strategy, supporting both our on-premises environment and our enterprise AWS cloud solution

•       Work closely with Data, Analytics and Reporting Data Management and Data Governance teams to ensure all industry standards and best practices are met

•       Ensure metadata and data lineage are captured and compatible with enterprise metadata and data management tools and processes

•       Run quality assurance and data integrity checks to ensure accurate reporting and data records

•       Ensure ETL pipelines are produced with the highest quality standards, metadata and validated for completeness and accuracy

•       Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.

•       Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

•       Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.

•       Write unit/integration tests, contribute to the engineering wiki, and document work.

•       Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.

•       Define company data assets (data models) and the Spark/SparkSQL jobs that populate them.

•       Design data integrations and a data quality framework.

•       Design and evaluate open source and vendor tools for data lineage.

•       Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.

•       Focus on structured problem solving

•       Demonstrate strong communication and business awareness

•       Work with ETL tools, query languages, and data repositories

•       Support technical data management solutions

•       Provide support to the development and testing teams to resolve data issues

Qualifications

•       Experience in database, storage, collection and aggregation models, techniques, and technologies – and how to apply them in business

•       Working knowledge of source code control tool such as GIT

•       Knowledge of file formats (e.g. XML, CSV, JSON), databases (e.g. Redshift, Oracle), and different types of connectivity is also very useful.

•       Working experience with the following Cloud platforms is a plus: Amazon Web Services, Google Cloud Platform, Azure

•       Working experience with data modeling, relational modeling, and dimensional modeling

•       Interpersonal skills: You have a way of speaking that engages your audience and instills confidence and credibility. You know how to leverage communication tools and methodologies. You can build relationships with internal and external team members, positioning yourself as a trusted advisor. You are always looking for ways to improve processes, and you always ensure your communications have been received and are clearly understood. Your commitment and focus influence those around you to do better.

See more jobs at JLIConsulting

Apply for this job

+30d

Director, Data Engineering

ecobee, Remote in Canada
agile, terraform, airflow, Design, docker, AWS

ecobee is hiring a Remote Director, Data Engineering

Hi, we are ecobee. 

ecobee introduced the world’s first smart Wi-Fi thermostat to help millions of consumers save money, conserve energy, and bring home automation into their lives. That was just the beginning. We continue our pursuit to create technology that brings peace of mind into the home and allows people to focus on the moments that matter most. We take pride in making a meaningful difference to the environment, all while being part of the exciting, connected home revolution. 

In 2021, ecobee became a subsidiary of Generac Power Systems. Generac introduced the first affordable backup generator and later created the category of automatic home standby generators. The company is committed to sustainable, cleaner energy products poised to revolutionize the 21st century electrical grid. Together, we take pride in making a meaningful difference to the environment.

Why we love to do what we do: 

We’re helping build the world of tomorrow with solutions that improve everyday life while making a positive impact on the planet. Our products and services work in harmony to provide comfort, efficiency, and peace of mind for millions of homes and businesses. While we’re proud of what we’ve done so far, there’s still a lot we can do—and you can be part of it.  

Join our extraordinary team. 

We're a rapidly growing global tech company headquartered in Canada, in the heart of downtown Toronto, with a satellite office in Leeds, UK (and remote ecopeeps in the US). We get to work with some of North America and UK's leading professionals. Our colleagues are proud to bring their authentic selves to work, confident that what we do is grounded in a greater purpose. We’re always looking for curious, talented, and passionate people to join our team.

This role is open to being 100% remote within Canada, although our home office is located in Toronto, Ontario. You’ll be required to travel to Toronto at a minimum once per quarter for team and/or company events.

Who You’ll be Joining: 

As the Director of Data Engineering, you’ll be joining our VP of Data Science and the greater Data Science Management team here at ecobee, as you lead your team of Data Engineers in building our next generation data platform.

You and your team will lead ecobee in the migration to a data-product culture and in doing so be responsible for defining and building a data-mesh architecture that governs and supports both new and existing data products across all our business domains.

How You’ll Make an Impact:

Your extensive knowledge in data engineering will go towards building an organizational data structure and system architecture that empowers our teams with the ability to intuitively build a cohesive data ecosystem, govern their own data and self-serve their ongoing needs with insights in real-time.

You’ll lead from the front, leveraging you and your team’s strong engineering capabilities to build the core data platform and champion new standards in data development, enabling your team and others to build data products that are discoverable, interoperable, addressable, and secure.

As a Director of Data Engineering at ecobee you will:

  • Foster a positive, supportive, and inclusive work environment.
  • Hire and develop a team of data engineers — providing them coaching, mentoring, motivation, and technical guidance.
  • Build high-quality, efficient, and scalable data infrastructure.
  • Bring big-picture thinking to our organizational data strategy, providing guidance to data, engineering, and product teams.
  • Partner with domain teams to migrate their existing data products to new data infrastructure and/or governance platform.
  • Lead data architecture design that reduces complexity and enables extendibility and reusability.
  • Continuously improve engineering practices — balancing speed, quality, and business impact.
  • Build effective agile practices that deliver robust solutions on time and on budget.
  • Lead the execution of project plans, delivery commitments, and risk mitigation.
  • Help evaluate the feasibility of initiatives through quick prototyping with respect to performance, quality, time, and cost.
  • Build strong partnerships with cross-functional teams to contribute to, and deliver unique customer experiences.
  • Thrive in a fast-paced, ambiguous, and high-stakes environment.

What You’ll Bring to the Table:

We've built the list below as a guideline for some of the skills and interests of our development team, but we strive to build our team with members from diverse backgrounds and skill sets, so if any combination of these applies to you, we'd love to chat!

  • A strong background in Data Engineering and/or Data Architecture that extends out to modern cloud infrastructure and data-mesh/data-fabric principles.
  • Hands-on engineering experience, and the on-going willingness to engage in hands-on development.
  • The ability to champion projects, educate and inspire cross-functional teams and stakeholders (both technical and non-technical) using excellent verbal and written communication skills.
  • Experience managing engineering teams empathetically and effectively.
  • Experience with Agile and other program management methodologies.
  • Strategies that proactively identify upcoming risks, issues, and bottlenecks both within your team and across departmental boundaries.
  • A curious, analytical mentality with a bias towards taking action.
  • Experience with some of the following technologies: GCP, AWS, Big Query, Dataflow, Airflow, Matillion, SiSense, Terraform, Docker

Just so you know: The successful candidate will be required to complete a background check. 

What happens after you apply?

Application Review. It will happen, by an actual person in Talent Acquisition. We get upwards of 100 applications for some roles, so it can take a few days, but every applicant can expect a note regarding their application status.

Interview Process (4 Rounds)

  • Round 1: A 45-minute phone call with a member of Talent Acquisition.
  • Round 2: A 1-hour virtual meeting with the VP of Data Science. This interview has a values and leadership focus.
  • Round 3: A 1-hour virtual meeting with a cross-functional team. This interview has a technical focus.
  • Round 4: A 1-hour virtual meeting with senior leaders from two teams you’ll work closely with in a cross-functional capacity.

With ecobee, you’ll have the opportunity to: 

  • Be part of something big: Get to work in a fresh, dynamic, and ever-growing industry.  
  • Make a difference for the environment: Make a sustainable impact while on your daily job, and after it through programs like ecobee acts. 
  • Expand your career: Learn with our in-house learning enablement team, and enjoy our generous professional learning budget. 
  • Put people first: Benefit from competitive salaries, health benefits, and a progressive Parental Top-Up Program (75% top-up or five bonus days off). 
  • Play a part in an exceptional culture: Enjoy a fun and casual workplace with an open concept office, located at Corus Quay. ecobee Leeds is based at our riverside office on the Calls. 
  • Celebrate diversity: Be part of a truly welcoming workplace. We offer a mentorship program and bias training.  

Are you interested? Let's make it work. 

Our people are empowered to take ownership of their schedules with workflows that allow for flexible hours. Based on your job, you have the option of an office-based, fully remote, or hybrid work environment. New team members working remotely will have all necessary equipment provided and shipped to them, and we conduct our interviews and onboarding sessions primarily through video.

We’re committed to inclusion and accommodation. 

ecobee believes that openness and diversity make us better. We welcome applicants from all backgrounds to apply regardless of race, gender, age, religion, identity, or any other aspect which makes them unique. Accommodations can be made upon request for candidates taking part in all aspects of the selection process. Our recruitment team is happy to answer any questions candidates may have about virtual interviewing, onboarding, and future work locations.

We’re up to incredible things. Come and be part of them. 

Discover our products and services and learn more about who we are.  

Ready to join ecobee? View current openings. 

Please note, ecobee does not accept unsolicited resumes.  

Apply for this job

+30d

Sr. Data Engineer - Data Analytics

R.S.Consultants, Pune, India, Remote
Bachelor's degree, scala, airflow, sql, Design, typescript, python, AWS, Node.js

R.S.Consultants is hiring a Remote Sr. Data Engineer - Data Analytics

Job Description

We are looking for a Sr. Data Engineer for an international client. This is a 100% remote job. The person will work from India and collaborate with a global team. 

Total Experience: 7+ Years

Your role

  • Take key responsibilities in requirements analysis; scalable, low-latency streaming platform solution design and architecture; and end-to-end delivery of key modules that provide real-time data solutions for our product
  • Write clean, scalable code in Go, TypeScript/Node.js, Scala, Python, or SQL, and test and deploy applications and systems
  • Solve our most challenging data problems, in real-time, utilizing optimal data architectures, frameworks, query techniques, sourcing from structured and unstructured data sources.
  • Be part of an engineering organization delivering high quality, secure, and scalable solutions to clients
  • Involvement in product and platform performance optimization and live site monitoring
  • Mentor team members through giving and receiving actionable feedback.

Our tech stack:

  • AWS (Lambda, SQS, Kinesis, KDA, Redshift, Athena, DMS, Glue, DynamoDB), Go/TypeScript, Airflow, Flink, Spark, Looker, EMR
  • A continuous deployment process based on GitLab

A little more about you:

  • A Bachelor's degree in a technical field (e.g., computer science or mathematics)
  • 3+ years of experience with real-time, event-driven architecture
  • 3+ years of experience with a modern programming language such as Scala, Python, Go, or TypeScript
  • Experience designing complex data processing pipelines
  • Experience with data modeling (star schema, dimensional modeling, etc.)
  • Experience with query optimisation
  • Experience with Kafka is a plus
  • Shipping and maintaining code in production
  • You like sharing your ideas, and you're open-minded

Why join us?

Key moment to join in terms of growth and opportunities

Our people matter; work-life balance is important

Fast-learning environment, entrepreneurial and strong team spirit

45+ nationalities: a cosmopolitan and multicultural mindset

Competitive salary package & benefits (health coverage, lunch, commute, sport)

DE&I Statement: 

We believe diversity, equity and inclusion, irrespective of origins, identity, background and orientations, are core to our journey. 

Qualifications

Hands-on experience in Scala/Python with data modeling and real-time/streaming data, including experience building complex data processing pipelines.

BE/ BTech in Computer Science

See more jobs at R.S.Consultants

Apply for this job

+30d

Sr Data Engineer

Science 37, Raleigh, NC (Remote)
agile, Bachelor's degree, jira, nosql, sql, Design, mongodb, c++, postgresql, mysql, linux, jenkins, python, AWS

Science 37 is hiring a Remote Sr Data Engineer

This is a fully Remote and Work From Home (WFH) opportunity within the US

The Senior Data Engineer collaborates with motivated, energetic, and entrepreneurial individuals working together to achieve Science 37’s mission of changing the world of clinical research through patient-centered design. They have a hands-on role in building and developing the data pipeline/platform that enables Science 37’s groundbreaking clinical research model, and they collaborate with Product, Data, Clinical Operations, and other relevant stakeholders to define study-specific platform requirements.

The Senior Data Engineer helps drive data democracy at Science. This position will work with our architects, software engineers, product managers, and DevOps to help design and build data solutions and architecture.  They will learn how Science 37 data is used and help make and drive the accessibility of the data that is needed, keeping in mind regulations and data privacy policies.  They will be able to use data processing libraries and tools to help the end users of our data get the insights they need.

DUTIES AND RESPONSIBILITIES

Duties include but are not limited to:

  1. Install, configure, monitor, and maintain databases in production, development, and testing environments
  2. Work with cloud vendors such as AWS or GCP
  3. Work with cloud distributed file systems, data lakes, and data warehouses
  4. Create data pipelines for internal and external analytics users
  5. Define and implement database schemas and configurations, working with our development teams
  6. Optimize database performance by identifying and resolving application bottlenecks, tuning DB queries, implementing stored procedures, conducting performance tests, troubleshooting, and integrating new elements
  7. Work with and lead the development team to design and implement reporting capabilities
  8. Implement solutions for database performance monitoring and tuning
  9. Recommend operational efficiencies, eliminate duplicate work efforts, and remove unnecessary complexities; create and implement new procedures and workflows
  10. Process database change requests, including the creation and modification of databases, tables, views, stored procedures, triggers, jobs, etc., in accordance with change control policies
  11. Use an understanding of Agile management to help the team with all release and configuration tasks around software builds into preproduction and production environments

QUALIFICATIONS & SKILLS 

Qualifications

The following qualifications are preferred and/or equivalent applicable experience:

  1. Bachelor's degree in Computer Science or equivalent
  2. Knowledge of architectural and database design skills
  3. Experience using SQL, NoSQL and Graph Databases
  4. Must have experience with AWS (other cloud providers are a plus)
  5. Scripting experience with Python or Bash required.
  6. Expertise with SQL.
  7. Have an understanding of data architecture.
  8. Deep experience across different database platforms and tools such as MySQL, PostgreSQL, SQL Server, DynamoDB, and MongoDB
  9. Deep experience designing and building data lake and data warehouse solutions
  10. Linux Server basic hands-on admin experience.
  11. Experience with Monitoring/Alert planning for data services.
  12. Experience with highly available database technologies like clustering, replication, mirroring, etc.
  13. Knowledge of administration, replication, backup and restore of relational databases
  14. Experience with data tools like Jupyter

Preferred Qualifications

  1. Experience in Clinical Trials and/or life science industry
  2. Experience with operational efficiency improvement initiatives
  3. Experience with CSV (Computer Systems Validation)
  4. Experience with JIRA, Confluence is a plus

Competencies

  1. Thrive in fast-paced, agile environments, and able to learn new areas quickly
  2. Broad knowledge of common infrastructure technologies such as web servers, load balancers, etc.
  3. Excellent troubleshooting skills and ability to understand complex relationships between components of multi-tiered and distributed applications.
  4. Solid understanding of load balancing and high volume, high availability environments
  5. Knowledge of SDLC and project management methodologies (JIRA experience is a plus)
  6. Able to analyze and review current functionality to determine potential areas of improvement and cost savings
  7. Ability to work independently with minimal guidance in a fast-paced environment
  8. Demonstrate excellent communication skills including the ability to effectively communicate with internal and external customers
  9. Strong work ethic with good time management with the ability to work with diverse teams and lead meetings
  10. Ability to work with all levels of the organization
  11. Experience using SQL, NoSQL, and graph databases
  12. Experienced in automation and automation tools such as Jenkins, Puppet, Chef, etc.
  13. Amazon: RDS, Aurora, Athena, DocumentDB, DynamoDB, Neptune
  14. Snowflake
  15. Experience programming in Python.

REPORTING

The incumbent reports to the Manager, Data Engineering, who will also assign projects, provide general direction and guidance. Incumbent is expected to perform duties and responsibilities with minimal supervision.


Science 37 is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.

Science 37 values the well-being of its employees and aims to provide team members with everything they need to succeed.

Submit your resume to apply!

To learn about Science 37's privacy practices including compliance with applicable privacy laws, please click here

See more jobs at Science 37

Apply for this job

+30d

Data Engineer

Zensark Tecnologies Pvt Ltd, Hyderabad, India, Remote
nosql, postgres, sql, oracle, Design, java, python, AWS

Zensark Tecnologies Pvt Ltd is hiring a Remote Data Engineer

Job Description

Job Title:              Data Engineer

Department:      Product Development

Reports to:         Director, Software Engineering

 

Summary:

Responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. Support our software developers, database architects, data analysts, and data scientists on data initiatives, and ensure the optimal data delivery architecture is consistent throughout ongoing projects. Responsible for optimizing, or even re-designing, Tangoe’s data architecture to support our next generation of products and data initiatives.

 

Responsibilities:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater performance and scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

 

Skills & Qualifications:

  • 5+ years of experience in a Data Engineer role
  • Experience with relational SQL and NoSQL databases, including Postgres, Oracle and Cassandra.
  • Experience with data pipeline and workflow management tools.
  • Experience with AWS cloud services: S3, EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark-Streaming, Amazon Kinesis, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, Node.js.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with both structured and unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

 

 

Education:

  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.

 

Working conditions: 

  • Remote

 

Tangoe reaffirms its commitment to providing equal opportunities for employment and advancement to qualified employees and applicants. Individuals will be considered for positions for which they meet the minimum qualifications and are able to perform without regard to race, color, gender, age, religion, disability, national origin, veteran status, sexual orientation, gender identity, current unemployment status, or any other basis protected by federal, state or local laws. Tangoe is an Equal Opportunity Employer -Minority/Female/Disability/Veteran/Current Unemployment Status.

 

Qualifications

  • Bachelor’s degree in Computer Science, Engineering or a related subject

See more jobs at Zensark Tecnologies Pvt Ltd

Apply for this job

+30d

Senior Data Engineer

seedtag, Madrid, ES, Remote
scala, airflow, sql, mongodb, kubernetes, linux, python

seedtag is hiring a Remote Senior Data Engineer

We are looking for a talented Senior Data Engineer to help us change the world of digital advertising together.

WHO WE ARE

At Seedtag our goal is to lead the change in the advertising industry, because we believe that effective advertising should not be at odds with users’ privacy.

By combining Natural Language Processing and Computer Vision our proprietary, Machine Learning-based technology provides a human-like understanding of the content of the web that finds the best context for each ad while providing unparalleled risk-mitigation capabilities that protect advertisers from showing their ads on pages that could be damaging for their brand. All of this, without relying on cookies or any other tracking mechanisms.

Every day, our teams develop new services that reach over 200 million users worldwide with fast response times to ensure that we deliver the best user experience. We’re fully committed to the DevOps culture, where we provide the platform that our Software Developers and Data Scientists use to manage over 100 different microservices, pushing dozens of changes to production every day. All of this is built on top of Kubernetes in Google Cloud Platform and Amazon Web Services.

If you are interested in joining one of the fastest growing startups in Europe and work on massive scalability challenges, this is the place for you.

KEY FIGURES

2014 · Founded by two ex-Googlers

2018 · 16M total turnover & internationalization & growth

2021 · Fundraising round of 40M€ & +10 countries & +230 Seedtaggers

2022 · Fundraising round of 250M€ & expansion into the U.S. market

ABOUT YOU

Your key responsibilities will be:

  • You will be a key player in the development of a reliable data architecture for ingestion, processing, and surfacing of data for large-scale applications
  • You will cooperate with other teams to unify data sources, as well as recommend and implement ways to improve data reliability, quality and integrity.
  • You will start by processing data from different sources using tools such as SQL, MongoDB, and Apache Beam, and will be exploring and proposing new methods and tools to acquire new data.
  • You will work with data science and data analytics teams, to help them improve their processes by building new tools and implementing best practices
  • You will ensure continuous improvement in delivery, applying engineering best practices to development, monitoring, and data quality of the data pipelines.

We're looking for someone who:

  • You have at least 5 years of solid experience in Data Engineering
  • You have a degree in Computer Science, Engineering, Statistics, Mathematics, Physics or another degree with a strong quantitative component.
  • You are comfortable with object-oriented languages, such as Python or Scala, and you are fluent in working with a Linux terminal and writing basic bash scripts.
  • You have ample experience with Data Engineering tools such as Apache Beam, Spark, Flink or Kafka.
  • You have experience orchestrating ETL processes using systems such as Apache Airflow, and managing databases like SQL, Hive or MongoDB.
  • You are a proactive person who likes the dynamic startup work culture

WHAT WE OFFER

  • Key moment to join Seedtag in terms of growth and opportunities
  • High-performance tier salary bands and excellent compensation
  • One Seedtag: Work for a month from any of our open offices, with travel and stay paid, if you’re a top performer (think of Brazil, Mexico...)
  • Paid travel to our HQ in Madrid to work p2p with your squad members
  • MacBook Pro M1
  • Build your home office with a budget of up to 1K€ (external screen, chair, table...)
  • Flexible schedule to balance work and personal life
  • An unlimited remote working environment, where you can choose to work from home indefinitely or attend our Madrid headquarters whenever you want, where you will find a great workplace location with food, snacks, great coffee, and much more.
  • A harassment-free, supportive, and safe environment to ensure the healthiest and friendliest professional experience, fostering diversity at all levels.
  • Optional company-paid English and/or Spanish courses.
  • Access to learning opportunities (learning & development budget)
  • We love what we do, but we also love having fun. We have many team activities you can join and enjoy with your colleagues! A yearly offsite with the whole company, team offsites, and Christmas events...
  • Access to a flexible benefits plan with restaurant, transportation, and kindergarten tickets, and discounts on medical insurance
Are you ready to join the Seedtag adventure? Then send us your CV!

See more jobs at seedtag

Apply for this job

+30d

Principal Data Engineer

agile, sql, c++, AWS

Blueprint Technologies is hiring a Remote Principal Data Engineer

Principal Data Engineer (Remote)

Who is Blueprint?

We are a technology solutions firm headquartered in Bellevue, Washington, with a strong presence across the United States. Unified by a shared passion for solving complicated problems, our people are our greatest asset. We use technology as a tool to bridge the gap between strategy and execution, powered by the knowledge, skills, and the expertise of our teams, who all have unique perspectives and years of experience across multiple industries. We’re bold, smart, agile, and fun.

What does Blueprint do?

Blueprint helps organizations unlock value from existing assets by leveraging cutting-edge technology to create additional revenue streams and new lines of business. We connect strategy, business solutions, products, and services to transform and grow companies.

Why Blueprint?

At Blueprint, we believe in the power of possibility and are passionate about bringing it to life. Whether you join our bustling product division, our multifaceted services team or you want to grow your career in human resources, your ability to make an impact is amplified when you join one of our teams. You’ll focus on solving unique business problems while gaining hands-on experience with the world’s best technology. We believe in unique perspectives and build teams of people with diverse skillsets and backgrounds. At Blueprint, you’ll have the opportunity to work with multiple clients and teams, such as data science and product development, all while learning, growing, and developing new solutions. We guarantee you won’t find a better place to work and thrive than at Blueprint.

What will I be doing?

Blueprint is looking for a Principal Data Engineer to join us as we build cutting-edge technology solutions!  The ideal candidate will have a solid background in consulting, with demonstrated experience leading clients through the process of building modern data estates. As a Principal Data Engineer, you will spend a majority of your time working directly with clients to develop their advanced modern data estates, warehouses, and analytical environments. You will also be responsible for overseeing and mentoring junior developers within the organization. 

Responsibilities:

  • Develop and implement effective data architecture solutions using Databricks and Lakehouse
  • Optimize and tune data pipelines for performance and scalability
  • Monitor and troubleshoot data pipelines to ensure data availability and reliability
  • Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and build solutions that enable them to extract insights from data
  • Implement best practices for data governance, data security, and data quality to ensure data integrity across all data sources
  • Create and maintain documentation related to data architecture, data pipelines, and data models
  • Stay up to date with emerging technologies and best practices in data engineering and big data processing
  • Mentor and train other data engineers on best practices for data engineering and Databricks usage
  • Provide thought leadership in the Databricks and Lakehouse space, both within the organization and externally

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field
  • 8+ years of experience in data engineering
  • 3+ years of experience working with Databricks and PySpark
  • 6-8+ years of experience with SQL
  • Appreciation for the Lakehouse medallion data architecture – bronze, silver, gold – and how those data stages are used
  • Working knowledge of DLT (Delta Live Tables) and Unity Catalog is a plus
  • Strong understanding of ETL and ELT data ingestion, acquisition, and data processing patterns
  • Experience with cloud-based data warehousing platforms such as Synapse, AWS Redshift, Google BigQuery, or Snowflake
  • Strong understanding of data engineering, data warehousing, data modeling, data governance, and data security best practices
  • Excellent problem-solving and troubleshooting skills
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment
  • Experience mentoring and training other data engineers
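The medallion architecture named in the qualifications above moves data through bronze (raw), silver (cleaned), and gold (business-level) stages. A minimal sketch of that staging logic in plain Python follows; in a real Lakehouse these would be Delta tables transformed with PySpark, so the dicts, field names, and stage rules here are purely illustrative assumptions:

```python
# Minimal sketch of medallion (bronze/silver/gold) stages.
# Plain dicts stand in for Spark DataFrames; field names are hypothetical.

def to_bronze(raw_records):
    """Bronze: land raw data as-is, tagging each record with its source."""
    return [{**r, "_source": "orders_api"} for r in raw_records]

def to_silver(bronze):
    """Silver: clean and deduplicate -- drop records missing an id,
    keep the last record seen per id."""
    by_id = {}
    for r in bronze:
        if r.get("order_id") is not None:
            by_id[r["order_id"]] = r
    return list(by_id.values())

def to_gold(silver):
    """Gold: business-level aggregate -- revenue per customer."""
    revenue = {}
    for r in silver:
        revenue[r["customer"]] = revenue.get(r["customer"], 0) + r["amount"]
    return revenue

raw = [
    {"order_id": 1, "customer": "a", "amount": 10},
    {"order_id": 1, "customer": "a", "amount": 12},  # late correction, supersedes the above
    {"order_id": 2, "customer": "b", "amount": 5},
    {"order_id": None, "customer": "c", "amount": 7},  # malformed, dropped at silver
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'a': 12, 'b': 5}
```

The point of the staging is that each layer has one job: bronze preserves everything for replay, silver enforces quality rules, and gold serves consumers.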

Salary Range

Pay ranges vary based on multiple factors including, without limitation, skill sets, education, responsibilities, experience, and geographical market. The pay range for this position reflects geographic based ranges for Washington state: $127,000 to $211,600 USD/annually. The salary/wage and job title for this opening will be based on the selected candidate’s qualifications and experience and may be outside this range.

 

Equal Opportunity Employer

Blueprint Technologies, LLC is an equal employment opportunity employer. Qualified applicants are considered without regard to race, color, age, disability, sex, gender identity or expression, orientation, veteran/military status, religion, national origin, ancestry, marital, or familial status, genetic information, citizenship, or any other status protected by law.

If you need assistance or a reasonable accommodation to complete the application process, please reach out to: recruiting@bpcs.com

Blueprint believes in the importance of a healthy and happy team, which is why our comprehensive benefits package includes:

  • Medical, dental, and vision coverage
  • Flexible Spending Account
  • 401k program
  • Competitive PTO offerings
  • Parental Leave
  • Opportunities for professional growth and development

Location: Remote

 

See more jobs at Blueprint Technologies

Apply for this job

+30d

Senior Data Engineer

Axios - Remote
agile, terraform, airflow, sql, Design, c++, python, AWS

Axios is hiring a Remote Senior Data Engineer

Quick take: Axios is a growth-stage company dedicated to providing trustworthy, award-winning news content in an audience-first format. We’re hiring a remote Senior Data Engineer to join our Consumer Insights data team! 

Why it matters: As a Senior Data Engineer, this person will collaborate with other data engineers, scientists, analysts, and product managers to drive forward data initiatives across mission-critical Axios products. The team is responsible for analyzing consumer behavior, preferences, and feedback to allow Axios to tailor products, services, and marketing strategies effectively.

Go deeper: As a Senior Data Engineer, you will play a leadership role in building and delivering solutions to problems in an intelligent and nuanced way. In this role, you will make an impact on Axios through the following responsibilities:

  • Architect and build data products and features that provide consumer insights about Axios’ audience
  • Hands-on development and execution against the team’s roadmap in collaboration with other data engineers, analysts, scientists, and quality engineers. 
  • Technical and architectural decision-making
  • Develop and maintain data pipelines and warehouses to support Axios in data-informed decision-making 
  • Writing clean, well-documented, and well-tested code primarily in SQL/Python
  • Provide technical insights and feasibility assessments, and communicate technical constraints to the team’s Product Manager
  • Estimate efforts of technical implementation to aid in planning and sequencing of developmental tasks
  • Mentoring less experienced members of the team through pair programming and empathetic code review
  • Share knowledge through presenting at data chapter meetings and demoing to team members and stakeholders
  • Staying up to date with industry trends and collaborating on best practices

The details: The ideal candidate should have an entrepreneurial spirit, be highly collaborative, exhibit a passion for building technology products, and have the following qualifications:

  • Experience with or knowledge of Agile Software Development methodologies
  • Experience building data applications in languages such as (but not limited to) Python, SQL, Bash, Jinja, Terraform 
  • Experience designing, building, and maintaining data pipelines to produce insights
  • Experience with functional design and dimensional data modeling
  • Experience with DBT and semantic data models
  • Experience with data pipeline development and data orchestration systems such as Airflow
  • Practical experience with columnar data warehouses, such as Redshift
  • Experience working with CI/CD pipelines and understanding best deployment practices for data products 
  • Proven ability to ship high-quality, testable, and accessible code quickly
  • Experience working in and around cloud providers such as AWS
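Orchestration systems like Airflow, listed in the qualifications above, model a pipeline as a DAG of tasks executed in dependency order. A minimal sketch of that scheduling idea in plain Python (the task names are hypothetical, and Python's standard-library `graphlib` stands in for a real scheduler):

```python
from graphlib import TopologicalSorter

# Hypothetical daily pipeline: extract -> load -> dbt transform -> tests,
# expressed as task -> set of upstream dependencies (the DAG an orchestrator holds).
dag = {
    "extract_events": set(),
    "load_warehouse": {"extract_events"},
    "dbt_run":        {"load_warehouse"},
    "dbt_test":       {"dbt_run"},
}

# An orchestrator runs tasks in a dependency-respecting order;
# for this linear chain there is exactly one valid order.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract_events', 'load_warehouse', 'dbt_run', 'dbt_test']
```

Airflow adds scheduling, retries, and backfills on top of this core idea, but the dependency graph is the contract between engineers and the scheduler.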

 Bonus experiences:

  • Experience working in and around AWS data services
  • Experience working with Data Scientists, Machine Learning Engineers or supporting MLOps 
  • Experience working with MapReduce and Spark clusters
  • Experience successfully working with data product managers
  • Experience working in Media 

Don’t forget:

  • Competitive salary
  • Health insurance (100% paid for individuals, 75% for families)
  • Primary caregiver 12-week paid leave
  • 401(k)
  • Generous vacation policy, plus company holidays
  • A commitment to an open, inclusive, and diverse work culture
  • Annual learning and development stipend

Additional pandemic-related benefits:

  • One mental health day per quarter
  • $100 monthly work-from-home stipend
  • Company-sponsored access to Ginger coaching and mental health support 
  • OneMedical membership, including tele-health services 
  • Increased work flexibility for parents and caretakers 
  • Access to the Axios “Family Fund”, which was created to allow employees to request financial support when facing financial hardship or emergencies 
  • ClassPass discount
  • Virtual company-sponsored social events

Starting salary for this role is in the range of $140,000 - $190,000 and is dependent on numerous factors, including but not limited to location, work experience, and skills. This range does not include other compensation benefits.

Equal Opportunity Employer Statement

Axios is an equal opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, age, gender identity, gender expression, veteran status, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws.

This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. Axios makes hiring decisions based solely on qualifications, merit, and business needs at the time.

See more jobs at Axios

Apply for this job

+30d

Senior Data Engineer

Signify Health - Dallas, TX or Remote
terraform, sql, RabbitMQ, Design, mobile, azure, scrum, qa, git, java, c++, .net, angular, AWS, frontend

Signify Health is hiring a Remote Senior Data Engineer

A Senior Software Engineer - Data develops systems to manage data flow throughout Signify Health’s infrastructure. This involves all elements of data engineering, such as ingestion, transformation, and distribution of data.

What will you do?

  • Communicate with business leaders to help translate requirements into functional specification
  • Develop broad understanding of business logic and functionality of current systems
  • Analyze and manipulate data by writing and running SQL queries
  • Analyze logs to identify and prevent potential issues from occurring
  • Deliver clean and functional code in accordance with business requirements
  • Consume data from any source, such as flat files, streaming systems, or RESTful APIs
  • Interface with Electronic Health Records
  • Engineer scalable, reliable, and performant systems to manage data
  • Collaborate closely with other Engineers, QA, Scrum master, Product Manager in your team as well as across the organization
  • Build quality systems while expanding offerings to dependent teams
  • Be comfortable in multiple roles, from design and development to code deployment, monitoring, and investigation in production systems
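The ingestion responsibility above (consuming data from flat files, streaming systems, or RESTful APIs) often starts with normalizing a file into records a pipeline can process. A minimal sketch, where the CSV layout and field names are assumptions, not Signify's actual schema:

```python
import csv
import io
import json

# Hypothetical CSV export from an upstream system; io.StringIO stands in
# for a real file handle so the sketch is self-contained.
flat_file = io.StringIO(
    "patient_id,visit_date,code\n"
    "p1,2023-01-05,E11.9\n"
    "p2,2023-01-06,I10\n"
)

# Ingest: parse rows into dicts, the shape a downstream pipeline stage expects.
records = [dict(row) for row in csv.DictReader(flat_file)]
print(json.dumps(records[0]))
# {"patient_id": "p1", "visit_date": "2023-01-05", "code": "E11.9"}
```

The same record shape then works whether the source was a file, a Kafka topic, or a paginated REST endpoint; only the reader changes.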

Requirements

  • Bachelor’s degree in Computer Science or equivalent
  • Proven ability to complete projects in a timely manner while clearly measuring progress
  • Strong software engineering fundamentals (data structures, algorithms, async programming patterns, object-oriented design, parallel programming) 
  • Strong understanding and demonstrated experience with at least one popular programming language (.NET or Java) and SQL constructs.
  • Experience writing and maintaining frontend client applications, Angular preferred
  • Strong experience with revision control (Git)
  • Experience with cloud-based systems (Azure / AWS / GCP).
  • High level understanding of big data design (data lake, data mesh, data warehouse) and data normalization patterns
  • Demonstrated experience with Queuing technologies (Kafka / SNS / RabbitMQ etc)
  • Demonstrated experience with Metrics, Logging, Monitoring and Alerting tools
  • Strong communication skills
  • Strong experience with use of RESTful APIs
  • High level understanding of HL7 V2.x / FHIR based interface messages.
  • High level understanding of system deployment tasks and technologies. (CI/CD Pipeline, K8s, Terraform)

The base salary hiring range for this position is $100,000 to $175,000. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.

In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  Eligible employees may enroll in a full range of medical, dental, and vision benefits, 401(k) retirement savings plan, and an Employee Stock Purchase Plan.  We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.  

About Us:
Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals’ clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we’re able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.
Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify’s intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.
To learn more about how we’re driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.

We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job