airflow Remote Jobs

103 Results

+30d

Databricks Data Engineer - Data & Analytics team (remote / Costa Rica- or LATAM-based)

Hitachi · San Jose, Costa Rica · Remote
scala · airflow · sql · design · azure · git · python · aws

Hitachi is hiring a Remote Databricks Data Engineer - Data & Analytics team (remote / Costa Rica- or LATAM-based)

Job Description

 

Please note: Although this position is primarily remote / virtual (there could be occasional onsite work in downtown San Jose, should you live close enough), you MUST live, and be authorized to work, in Costa Rica without sponsorship. Candidates in other Latin America (LATAM) countries can be considered as employees if willing to relocate to Costa Rica, or can work via our 3rd-party payroll company.

 

DATA ENGINEER (DATABRICKS, PYTHON, SPARK) 

This is a full-time, fully benefited career opportunity in our Data & Analytics organization (Azure Data Warehouse / Data Lakehouse and Business Intelligence) for a highly experienced Data Engineer in Big Data systems design, with hands-on knowledge of data architecture, especially Spark and Delta Lake / Data Lake technologies.

Individuals in this role will assist in the design, development, enhancement, and maintenance of complex data pipeline products that manage business-critical operations and large-scale analytics pipelines. Qualified applicants will have a demonstrated capability to learn new concepts quickly, a data engineering background, and/or robust software engineering expertise.

Responsibilities

  • Scope and execute together with team leadership. Work with the team to understand platform capabilities and how to best improve and expand those capabilities.
  • Work with strong independence and autonomy.
  • Design, development, enhancement, and maintenance of complex data pipeline products which manage business-critical operations and large-scale analytics applications.
  • Lead mid- and senior-level data engineers. 
  • Support analytics, data science and/or engineering teams and understand their unique needs and challenges. 
  • Instill excellence into the processes, methodologies, standards, and technology choices embraced by the team.
  • Embrace new concepts quickly to keep up with fast-moving data engineering technology.
  • Dedicate time to continuous learning to keep the team apprised of the latest developments in the space.
  • Commitment to developing technical maturity across the company.

Qualifications

  • 5+ years of Data Engineering experience, including 2+ years designing and building Databricks data pipelines, is REQUIRED; Azure cloud is highly preferred, though AWS, GCP, or other cloud platform experience will be considered in lieu of Azure
  • Experience with conceptual, logical and/or physical database designs is a plus
  • 2+ years of hands-on Python/Pyspark/SparkSQL and/or Scala experience is REQUIRED
  • 2+ years of experience with Big Data pipelines or DAG Tools (Data Factory, Airflow, dbt, or similar) is REQUIRED
  • 2+ years of Spark experience (especially Databricks Spark and Delta Lake) is REQUIRED
  • 2+ years of hands-on experience implementing Big Data solutions in a cloud ecosystem, including Data/Delta Lakes, is REQUIRED
  • Experience with source control (git) on the command line is REQUIRED
  • 2+ years of SQL experience, specifically to write complex, highly optimized queries across large volumes of data is HIGHLY DESIRED
  • Data modeling / data profiling capabilities with Kimball/star schema methodology is a plus
  • Professional experience with Kafka, or other live data streaming technology, is HIGHLY DESIRED
  • Professional experience with database deployment pipelines (e.g., dacpacs or similar technology) is HIGHLY DESIRED
  • Professional experience with one or more unit testing or data quality frameworks is HIGHLY DESIRED
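Two of the qualifications above (writing complex, optimized SQL and Kimball/star-schema modeling) can be made concrete with a small sketch. The schema and data below are hypothetical, and SQLite stands in for a warehouse engine only so the example is self-contained; a Databricks pipeline would express the same star-schema join in Spark SQL over Delta tables.

```python
# Minimal star-schema sketch (Kimball style): one fact table keyed to
# dimension tables, queried with a join-and-aggregate. Hypothetical data;
# SQLite is used only to keep the example self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_sales  (product_key INTEGER, date_key INTEGER, amount REAL);

    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO dim_date    VALUES (20240101, 2024, 1), (20240201, 2024, 2);
    INSERT INTO fact_sales  VALUES (1, 20240101, 100.0), (2, 20240101, 50.0), (1, 20240201, 75.0);
""")

# Typical dimensional query: roll the fact table up by dimension attributes.
rows = conn.execute("""
    SELECT p.category, d.year, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date    d ON d.date_key    = f.date_key
    GROUP BY p.category, d.year
""").fetchall()
print(rows)  # → [('Hardware', 2024, 225.0)]
```

The same fact/dimension separation is what the "data modeling / data profiling" bullet refers to: facts hold measures, dimensions hold descriptive attributes, and queries roll facts up by dimension.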

#LI-CA1 #REMOTE #databricks #python #spark #dataengineer #datawrangler

Apply for this job

+30d

Senior Data Engineer

seedtag · Madrid, ES · Remote
scala · airflow · sql · mongodb · kubernetes · linux · python

seedtag is hiring a Remote Senior Data Engineer

We are looking for a talented Senior Data Engineer to help us change the world of digital advertising together.

WHO WE ARE

At Seedtag our goal is to lead the change in the advertising industry, because we believe that effective advertising should not be at odds with users’ privacy.

By combining Natural Language Processing and Computer Vision our proprietary, Machine Learning-based technology provides a human-like understanding of the content of the web that finds the best context for each ad while providing unparalleled risk-mitigation capabilities that protect advertisers from showing their ads on pages that could be damaging for their brand. All of this, without relying on cookies or any other tracking mechanisms.

Every day, our teams develop new services that reach over 200 million users worldwide with fast response times to ensure that we deliver the best user experience. We’re fully committed to the DevOps culture, where we provide the platform that our Software Developers and Data Scientists use to manage over 100 different microservices, pushing dozens of changes to production every day. All of this is built on top of Kubernetes in Google Cloud Platform and Amazon Web Services.

If you are interested in joining one of the fastest-growing startups in Europe and working on massive scalability challenges, this is the place for you.

KEY FIGURES

2014 · Founded by two ex-Googlers

2018 · 16M total turnover & internationalization & accelerating growth

2021 · Fundraising round of 40M€ & +10 countries & +230 Seedtaggers

2022 · Fundraising round of 250M€ & expansion into the U.S. market

ABOUT YOU

Your key responsibilities will be:

  • You will be a key player in the development of a reliable data architecture for ingestion, processing, and surfacing of data for large-scale applications
  • You will cooperate with other teams to unify data sources, as well as recommend and implement ways to improve data reliability, quality and integrity.
  • You will start by processing data from different sources using tools such as SQL, MongoDB, and Apache Beam, and will be exploring and proposing new methods and tools to acquire new data.
  • You will work with data science and data analytics teams, to help them improve their processes by building new tools and implementing best practices
  • You will ensure continuous improvement in delivery, applying engineering best practices to development, monitoring, and data quality of the data pipelines.

We're looking for someone who:

  • You have at least 5 years of solid experience in Data Engineering
  • You have a degree in Computer Science, Engineering, Statistics, Mathematics, Physics or another degree with a strong quantitative component.
  • You are comfortable with object-oriented languages, such as Python or Scala, and you are fluent in working with a Linux terminal and writing basic bash scripts.
  • You have ample experience with Data Engineering tools such as Apache Beam, Spark, Flink or Kafka.
  • You have experience orchestrating ETL processes using systems such as Apache Airflow, and managing databases like SQL, Hive or MongoDB.
  • You are a proactive person who likes the dynamic startup work culture
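The ETL-orchestration experience asked for above (Airflow and similar systems) comes down to running dependent tasks in topological order. The sketch below is a toy stand-in, not Airflow itself: it uses only the standard library's graphlib to order hypothetical extract/transform/load steps the way a scheduler would.

```python
# Toy DAG runner illustrating the core idea behind Airflow-style
# orchestration: tasks declare upstream dependencies and run in
# topological order. Task names and bodies are hypothetical.
from graphlib import TopologicalSorter

log = []

def extract():   log.append("extract")
def transform(): log.append("transform")
def load():      log.append("load")

tasks = {"extract": extract, "transform": transform, "load": load}

# Mapping of task -> set of upstream tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

# A real scheduler adds retries, scheduling, and parallelism on top of
# exactly this ordering step.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()

print(log)  # → ['extract', 'transform', 'load']
```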

WHAT WE OFFER

  • Key moment to join Seedtag in terms of growth and opportunities
  • High-performance tier salary bands and excellent compensation
  • One Seedtag: Work for a month from any of our open offices, with travel and stay paid, if you’re a top performer (think of Brazil, Mexico...)
  • Paid travel to our HQ in Madrid to work peer-to-peer with your squad members
  • Macbook Pro M1
  • Build your home office with a budget of up to 1K€ (external screen, chair, table...)
  • Flexible schedule to balance work and personal life
  • An unlimited remote working environment, where you can choose to work from home indefinitely or attend our Madrid headquarters whenever you want, where you will find a great workplace location with food, snacks, great coffee, and much more.
  • A harassment-free, supportive and safe environment to ensure the healthiest and friendliest professional experience, fostering diversity at all levels.
  • Optional company-paid English and/or Spanish courses.
  • Access to learning opportunities (learning & development budget)
  • We love what we do, but we also love having fun. We have many team activities you can join and enjoy with your colleagues! A yearly offsite with the whole company, team offsites, and Christmas events...
  • Access to a flexible benefits plan with restaurant, transportation, and kindergarten tickets, and discounts on medical insurance

Are you ready to join the Seedtag adventure? Then send us your CV!

See more jobs at seedtag

Apply for this job

+30d

Senior Data Engineer

Axios · Remote
agile · terraform · airflow · sql · design · c++ · python · aws

Axios is hiring a Remote Senior Data Engineer

Quick take: Axios is a growth-stage company dedicated to providing trustworthy, award-winning news content in an audience-first format. We’re hiring a remote Senior Data Engineer to join our Consumer Insights data team! 

Why it matters: As a Senior Data Engineer, you will collaborate with other data engineers, scientists, analysts, and product managers to drive forward data initiatives across mission-critical Axios products. The team is responsible for analyzing consumer behavior, preferences, and feedback to allow Axios to tailor products, services, and marketing strategies effectively.

Go deeper: As a Senior Data Engineer, you will play a leadership role in building and delivering solutions to problems in an intelligent and nuanced way. In this role, you will make an impact on Axios through the following responsibilities:

  • Architect and build data products and features that provide consumer insights about Axios’ audience
  • Contribute hands-on development and execution against the team’s roadmap in collaboration with other data engineers, analysts, scientists, and quality engineers
  • Make technical and architectural decisions
  • Develop and maintain data pipelines and warehouses to support Axios in data-informed decision-making
  • Write clean, well-documented, and well-tested code, primarily in SQL and Python
  • Provide technical insights and feasibility assessments, and communicate technical constraints to the team’s Product Manager
  • Estimate the effort of technical implementation to aid in planning and sequencing of development tasks
  • Mentor less experienced members of the team through pair programming and empathetic code review
  • Share knowledge by presenting at data chapter meetings and demoing to team members and stakeholders
  • Stay up to date with industry trends and collaborate on best practices

The details: The ideal candidate should have an entrepreneurial spirit, be highly collaborative, exhibit a passion for building technology products, and have the following qualifications:

  • Experience with or knowledge of Agile Software Development methodologies
  • Experience building data applications in languages such as (but not limited to) Python, SQL, Bash, Jinja, Terraform 
  • Experience designing, building, and maintaining data pipelines to produce insights
  • Experience with functional design and dimensional data modeling
  • Experience with dbt and semantic data models
  • Experience with data pipeline development and data orchestration systems such as Airflow
  • Practical experience with columnar data warehouses, such as Redshift
  • Experience working with CI/CD pipelines and understanding best deployment practices for data products 
  • Proven ability to ship high-quality, testable, and accessible code quickly
  • Experience working in and around cloud providers such as AWS
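The "high-quality, testable" expectation above is easiest to show with a small example. The transformation and field names below are hypothetical, shown only to illustrate the kind of unit-testable pipeline step the posting describes: pure functions with no hidden state, so a CI pipeline can assert on them directly.

```python
# A pipeline step written as a pure function so it is trivially
# unit-testable: deduplicate event records by id, keeping the record
# with the latest timestamp. Field names are hypothetical.
def dedupe_latest(events):
    """Keep one record per 'id', preferring the highest 'ts'."""
    latest = {}
    for e in events:
        if e["id"] not in latest or e["ts"] > latest[e["id"]]["ts"]:
            latest[e["id"]] = e
    return sorted(latest.values(), key=lambda e: e["id"])

# The kind of check a CI/CD pipeline would run on every change:
events = [
    {"id": 1, "ts": 10, "v": "a"},
    {"id": 1, "ts": 20, "v": "b"},
    {"id": 2, "ts": 5,  "v": "c"},
]
result = dedupe_latest(events)
print(result)  # → [{'id': 1, 'ts': 20, 'v': 'b'}, {'id': 2, 'ts': 5, 'v': 'c'}]
```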

 Bonus experiences:

  • Experience working in and around AWS data services
  • Experience working with Data Scientists, Machine Learning Engineers or supporting MLOps 
  • Experience working with MapReduce and Spark clusters
  • Experience successfully working with data product managers
  • Experience working in Media 

Don’t forget:

  • Competitive salary
  • Health insurance (100% paid for individuals, 75% for families)
  • Primary caregiver 12-week paid leave
  • 401K
  • Generous vacation policy, plus company holidays
  • A commitment to an open, inclusive, and diverse work culture
  • Annual learning and development stipend

Additional pandemic-related benefits:

  • One mental health day per quarter
  • $100 monthly work-from-home stipend
  • Company-sponsored access to Ginger coaching and mental health support 
  • OneMedical membership, including tele-health services 
  • Increased work flexibility for parents and caretakers 
  • Access to the Axios “Family Fund”, which was created to allow employees to request financial support when facing financial hardship or emergencies 
  • ClassPass discount
  • Virtual company-sponsored social events

Starting salary for this role is in the range of $140,000 - $190,000 and is dependent on numerous factors, including but not limited to location, work experience, and skills. This range does not include other compensation benefits.

Equal Opportunity Employer Statement

Axios is an equal opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, age, gender identity, gender expression, veteran status, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws.

This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. Axios makes hiring decisions based solely on qualifications, merit, and business needs at the time.

See more jobs at Axios

Apply for this job