airflow Remote Jobs

102 Results

26d

Strong Junior Product Analyst at HolyWater

Genesis, Ukraine, Remote
tableau, airflow, sql, B2C, Firebase, python, AWS

Genesis is hiring a Remote Strong Junior Product Analyst at HolyWater

WE SUPPORT UKRAINE

Holy Water condemns russia's war against Ukraine and helps the state. At the start of the full-scale war, we launched the sale of an NFT collection about the events in Ukraine to raise 1 million dollars for the needs of the Ukrainian army, and we joined the Genesis for Ukraine corporate charity fund. The fund's team purchases essential gear, equipment, and medicine for employees and their relatives defending the country on the front lines, and we also donate to the Armed Forces of Ukraine on an ongoing basis.

MEET YOUR FUTURE TEAM!

You will work at Holy Water, a ContentTech startup that creates and publishes books, audiobooks, interactive stories, and video series. We build a synergy between the efficiency of AI and the creativity of writers, helping them inspire tens of millions of users around the world with their content.

HolyWater was founded in 2020 within the Genesis ecosystem. Since then the team has grown from 6 to 90 specialists, and our apps have repeatedly led their categories in the US, Australia, Canada, and Europe.

Through our platform, we give any talented writer the chance to reach the multimillion audience of our apps and inspire them with their stories. More than 10 million users around the world already use our products.

OUR ACHIEVEMENTS IN 2023:

1. Our interactive stories app was the top app by downloads in its niche worldwide for 3 months.
2. Our book app, Passion, became the top app in its niche in the US and Europe in December.
3. We launched a video series platform based on our books and produced our first successful pilot series.
4. New downloads and revenue nearly doubled compared to 2022.

HolyWater's core value is the people who work with us. That is why we make every effort to create conditions in which every employee can realize their potential to the fullest and achieve the most ambitious goals.

COMPANY CULTURE

In its work the team relies on six key values: constant growth, intrinsic motivation, grit and flexibility, mindfulness, freedom and responsibility, and focus on results.

The team is now looking for a Strong Junior Product Analyst to become a new member of the analytics team.

YOUR RESPONSIBILITIES WILL INCLUDE:

  • Generating growth hypotheses and launching A/B tests together with the product team (see the sketch after this list).
  • Supporting analytical processes during A/B testing to optimize product decisions.
  • Finding growth points in the product and in marketing.
  • Collaborating with product managers, developers, and marketers to directly influence the product.
  • Automating report preparation for effective monitoring of metrics.
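
For illustration, here is a minimal sketch of the kind of A/B-test readout these duties involve, using only the Python standard library; the variant counts below are hypothetical, not Holy Water data.

```python
# Two-proportion z-test for a hypothetical A/B conversion experiment.
from math import sqrt
from statistics import NormalDist

def ab_test_readout(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z-score and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided
    return z, p_value

# Hypothetical numbers: variant B converts 5.4% vs. 4.8% for variant A.
z, p = ab_test_readout(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # ship B only if p is below your alpha
```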

WHAT YOU NEED TO JOIN:

  • 1+ year of experience as a Data Analyst / Data Scientist.
  • Experience with column-oriented storage (BigQuery, AWS Athena, etc.).
  • Professional-level SQL skills.
  • Experience building and visualizing data with BI tools (Tableau).
  • Experience with Amplitude, Firebase, and AppsFlyer.
  • Responsibility and proactivity.
  • Project-oriented and logical thinking.

NICE TO HAVE:

  • Understanding of Python basics for analytics.
  • Experience with Google Cloud Platform.
  • Experience with B2C mobile apps.

WHAT WE OFFER:

  • You will be part of a close-knit team of professionals where you can exchange knowledge and experience and receive support and advice from colleagues.
  • A flexible schedule and the option to work remotely from any safe place in the world.
  • The option to work from the office in Kyiv's Podil district. At the office you don't need to worry about the routine: breakfasts, lunches, plenty of snacks and fruit, lounge zones, massages, and other perks await you.
  • 20 working days of paid vacation per year and unlimited sick leave.
  • Medical insurance.
  • The option to consult a psychologist.
  • All the equipment you need for work.
  • We actively use modern tools and technologies such as BigQuery, Tableau, Airflow, Airbyte, and dbt, so you will work with cutting-edge tools and expand your analytics skills.
  • An online library, regular lectures by top-level speakers, and compensation for conferences, trainings, and seminars.
  • A professional internal community for your career development.
  • A culture of open feedback.

SELECTION STAGES:

1. Initial screening. A recruiter asks a few questions (by phone or in a messenger) to form an impression of your experience and skills before the interview.
2. Test assignment.
It confirms your expertise and shows which approaches, tools, and solutions you use in your work. We do not limit your time and never use candidates' work without an appropriate agreement.
3. Interview with the manager.
A comprehensive conversation about your professional competencies and the work of the team you are applying to.
4. Bar raising.
For the final interview we invite one of the top managers of the Genesis ecosystem, someone who will not work directly with the candidate. The bar raiser focuses on your soft skills and values to understand how quickly you will be able to grow together with the company.


If you are ready to take on the challenge and join our team, we look forward to your resume!

    See more jobs at Genesis

    Apply for this job

    26d

    Especialista Cientista de dados

    Experian, Brasília, Brazil, Remote
    nosql, airflow, git, docker, linux, AWS

    Experian is hiring a Remote Especialista Cientista de dados

    Job Description

    Area: DA-Regions
    Subarea: Governance & Risk Management

    We build solutions to improve our products through data analysis, knowledge discovery, business intelligence, and machine learning models for fraud prevention across our products.

    What will your main deliverables be?

    • Building and maintaining data pipelines for the Lakehouse;
    • Analyzing and visualizing data for Business Intelligence;
    • Building and deploying Machine Learning models;
    • Distributed processing using Big Data tools (see the sketch after this list).
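
    For illustration, a minimal PySpark sketch of one such pipeline step: read raw events, curate them, and write them back to the lakehouse. The bucket names, paths, and columns are hypothetical stand-ins, not Experian's.

    ```python
    # Hypothetical batch step: raw S3 events -> curated, partitioned table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("curate-events").getOrCreate()

    raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

    curated = (
        raw.filter(F.col("event_type").isNotNull())      # drop malformed rows
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])                 # idempotent reruns
    )

    (curated.write
            .mode("overwrite")
            .partitionBy("event_date")
            .parquet("s3://example-curated-bucket/events/"))
    ```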

      Qualifications

      What we are looking for in you:

      • Programming experience, mainly with Python;
      • Knowledge of data structures, algorithms, object orientation, and design patterns;
      • Experience with NoSQL databases;
      • Experience with stream-processing tools;
      • Experience with Airflow or another workflow tool;
      • Experience with AWS (EMR, SageMaker, Athena, Glue, S3);
      • Experience with Spark;
      • Experience with TensorFlow;
      • Experience with Linux, Git, and Docker;
      • Intermediate English.

       

            See more jobs at Experian

            Apply for this job

            28d

            Senior Data Engineer

            airflow, postgres, sql, oracle, Design, docker, mysql, kubernetes, python, AWS

            ReCharge Payments is hiring a Remote Senior Data Engineer

            Who we are

            In a world where acquisition costs are skyrocketing, funding is scarce, and ecommerce merchants are forced to do more with less, the most innovative DTC brands understand that subscription strategy is business strategy.

            Recharge is simplifying retention and growth for innovative ecommerce brands. As the #1 subscription platform, Recharge is dedicated to empowering brands to easily set up and manage subscriptions, create dynamic experiences at every customer touchpoint, and continuously evaluate business performance. Powering everything from no-code customer portals, personalized offers, and customizable bundles, Recharge helps merchants seamlessly manage, grow, and delight their subscribers while reducing operating costs and churn. Today, Recharge powers more than 20,000 merchants serving 90 million subscribers, including brands such as Blueland, Hello Bello, CrunchLabs, Verve Coffee Roasters, and Bobbie. Recharge doesn't just help you sell products; we help build buyer routines that last.

            Recharge is recognized on Deloitte's Technology Fast 500 (3rd consecutive year) and is Great Place to Work Certified.

            Overview

            The centralized Data and Analytics team at Recharge delivers critical analytic capabilities and insights for Recharge’s business and customers. 

            As a Senior Data Engineer, you will build scalable data pipelines and infrastructure that power internal business analytics and customer-facing data products.  Your work will empower data analysts to derive deeper strategic insights from our data, and will  enable developers to build applications that surface data insights directly to our merchants. 

            What you’ll do

            • Build data pipeline, ELT and infrastructure solutions to power internal data analytics/science and external, customer-facing data products.

            • Create automated monitoring, auditing, and alerting processes that ensure data quality and consistency (see the sketch after this list).

            • Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models

            • Design, develop, implement, and optimize existing ETL processes that merge data from disparate sources for consumption by data analysts, business owners, and customers

            • Seek ways to continually improve the operations, monitoring and performance of the data warehouse

            • Influence and communicate with all levels of stakeholders including analysts, developers, business users, and executives.

            • Live by and champion our values: #day-one, #ownership, #empathy, #humility.
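
            As a rough illustration of the monitoring item above, here is a minimal Python sketch of one automated data-quality check; the counts, threshold, and alerting behaviour are hypothetical, not Recharge's implementation.

            ```python
            # Hypothetical check: row-count parity between a source table and
            # its warehouse copy, alerting when drift exceeds a tolerance.
            def check_row_count_parity(source_count: int, warehouse_count: int,
                                       tolerance: float = 0.001) -> None:
                drift = abs(source_count - warehouse_count) / max(source_count, 1)
                if drift > tolerance:
                    # In production this would page on-call or post to a channel.
                    raise ValueError(
                        f"row-count drift {drift:.2%} exceeds {tolerance:.2%}"
                    )

            # Passes: 0.06% drift is within the 0.1% tolerance.
            check_row_count_parity(source_count=1_000_000, warehouse_count=999_400)
            ```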

            What you’ll bring

            • Typically 5+ years of experience in a data engineering-related role (Data Engineer, Data Platform Engineer, Analytics Engineer, etc.) with a track record of building scalable data pipeline, transformation, and platform solutions. 

            • 3+ years of hands-on experience designing and building data pipelines and models to ingest, transform, and deliver large amounts of data from multiple sources into a dimensional (star schema) data warehouse or data lake.

            • Experience with a variety of data warehouse, data lake, and enterprise data management platforms (Snowflake preferred; Redshift, Databricks, MySQL, Postgres, Oracle, RDS, AWS, GCP)

            • Experience building data pipelines, models and infrastructure powering external, customer-facing (in addition to internal business facing) analytics applications.

            • Solid grasp of data warehousing methodologies such as Kimball and Inmon

            • Experience working with a variety of ETL tools (Fivetran, dbt, Python, etc.)

            • Experience with workflow orchestration management engines such as Airflow & Cloud Composer

            • Hands-on experience with data infrastructure tools like Kubernetes and Docker

            • Expert proficiency in SQL

            • Strong Python proficiency

            • Experience with ML Operations is a plus.

            Recharge | Instagram | Twitter | Facebook

            Recharge Payments is an equal opportunity employer. In addition to EEO being the law, it is a policy that is fully consistent with our principles. All qualified applicants will receive consideration for employment without regard to status as a protected veteran or a qualified individual with a disability, or other protected status such as race, religion, color, national origin, sex, sexual orientation, gender identity, genetic information, pregnancy or age. Recharge Payments prohibits any form of workplace harassment. 

            Transparency in Coverage

            This link leads to the Anthem Blue Cross machine-readable files that are made available in response to the federal Transparency in Coverage Rule and includes network negotiated rates for all items and services; allowed amounts for OON items, services and prescription drugs; and negotiated rates and historical prices for network prescription drugs (delayed). EIN 80-6245138. This link leads to the Kaiser machine-readable files.

            #LI-Remote

            See more jobs at ReCharge Payments

            Apply for this job

            29d

            Python Engineer, Data Group

            Wolt, Stockholm, Sweden, Remote
            airflow, kubernetes, python

            Wolt is hiring a Remote Python Engineer, Data Group

            Job Description

            Data at Wolt

            As the scale of Wolt has rapidly grown, we are introducing new users to our data platform every day and want this to become a coherent and streamlined experience for all users, whether they're Analysts or Data Scientists working with our data, or teams bringing new data to the platform from their applications. We aim both to provide new platform capabilities across batch, streaming, orchestration, and data integration to serve our users' needs, and to build an intuitive interface for them to solve their use cases without having to learn the details of the underlying tools.

            In the context of this role we are hiring an experienced Senior Software Engineer to provide technical leadership and individual contribution in one of the following workstreams:

            Data Governance

            Wolt’s Data Group has already developed initial foundational tooling in the areas of data management, security, auditing, data catalog, and quality monitoring, but through your technical contributions you will ensure our Data Governance tooling is state of the art. You’ll be improving the current Data Governance platform, making sure it can be further integrated with the rest of the Data Platform and Wolt Services in a scalable, secure, compliant way, without significant disruptions to the teams. 

            Data Experience

            We want to ensure our Analysts, Data Scientists, and Engineers can discover, understand, and publish high-quality data at scale. We have recently released a new data platform tool which enables simple, yet powerful creation of workflows via a declarative interface. You will help us ensure our users succeed in their work with effective and polished user experiences by developing our internal user-facing tooling and curating our documentation to the highest standards. And what's best, you get to work closely with excited users to get continuous feedback about released features while supporting and onboarding them to new workflows.

            Data Lakehouse

            We recently started this workstream to manage data integration, organization, and maintenance of our new Iceberg based data lakehouse architecture. Together, we build and maintain ingestion pipelines to efficiently gather data from diverse sources, ensuring seamless data flow. We create and manage workflows to transform raw data into structured formats, guaranteeing data quality and accessibility for analytics and machine learning purposes.

            When you join, we’ll match you with one of these workstreams based on our needs and your skills, experience, and preferences.

            How we work

            Our teams have a lot of autonomy and ownership in how they work and solve their challenges. We value collaboration, learning from each other and helping each other out to achieve the team’s goals. We create an environment of trust, in which everyone’s ideas are heard and where we challenge each other to find the best solutions. We have empathy towards our users and other teams. Even though we’re working in a mostly remote environment these days, we stay connected and don’t forget to have fun together building great software!

            Our tech stack

            Our primary programming language of choice is Python. We deploy our systems in Kubernetes and AWS. We use Datadog for observability (logging and metrics). We have built our data warehouse on top of Snowflake and orchestrate our batch processes with Airflow and Dagster. We are heavy users of Kafka and Kafka Connect. Our CI/CD pipelines rely on GitHub actions and Argo Workflows.
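
            For illustration, a minimal Airflow DAG sketch of the batch orchestration described above, written against the Airflow 2.x TaskFlow API (the schedule parameter assumes Airflow 2.4+); the DAG id and task bodies are hypothetical stand-ins, not Wolt code.

            ```python
            # Hypothetical daily extract-and-load DAG using the TaskFlow API.
            from datetime import datetime
            from airflow.decorators import dag, task

            @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
            def example_batch_pipeline():
                @task
                def extract() -> list[dict]:
                    return [{"order_id": 1, "amount": 12.5}]  # stand-in source

                @task
                def load(rows: list[dict]) -> None:
                    print(f"loading {len(rows)} rows")        # stand-in sink

                load(extract())

            example_batch_pipeline()
            ```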

            Qualifications

            The vast majority of our services, applications and data pipelines are written in Python, so several years of having shipped production quality software in high throughput environments written in Python is essential. You should be very comfortable with typing, dependency management, unit-, integration- and end-to-end tests. If you believe that software isn’t just a program running on a machine, but the solution to someone’s problem, you’re in the right place.

            Having previous experience in planning and executing complex projects that touch multiple teams/stakeholders and run across a whole organization is a big plus. Good communication and collaboration skills are essential, and you shouldn’t shy away from problems, but be able to discuss them in a constructive way with your team and the Wolt Product team at large.

            Familiarity with parts of our tech stack is definitely a plus, but we hire for attitude and ability to learn over knowing a specific technology that can be learned.

            The tools we are building inside of the data platform ultimately serve our many stakeholders across the whole company, whether they are Analysts, Data Scientists or engineers in other teams that produce or consume data. 

            We want all of our users to love the tools we’re building and that is why we want you to focus on building intuitive and user friendly applications that enable everyone to use and work with data at Wolt.

            See more jobs at Wolt

            Apply for this job

            +30d

            Senior Data Engineer - (GS)

            ITScout, LATAM, AR, Remote
            agile, tableau, airflow, sql, Design, mobile, api, python, AWS

            ITScout is hiring a Remote Senior Data Engineer - (GS)

            ⚠️Only available for #residents of #Latinamerica⚠️


            Our client builds smart technology solutions through the combination of artificial intelligence, mobile, and web development for companies in the United States, Canada, and LATAM. It's a technology company headquartered in Costa Rica, with operations throughout LATAM. Their core focus is building intelligent tech solutions that help their customers be more efficient in optimizing internal digital processes.

            About the job Senior Data Engineer

            How will you make an impact?

            • Build and maintain multiple data pipelines to ingest new data sources (API and file-based) and support products used by both external users and internal teams.
            • Optimize by building tools to evaluate and automatically monitor data quality, develop automated scheduling, testing, and distribution of feeds.
            • Work with our data science and product management teams to design, rapid prototype, and productize new data product ideas and capabilities.
            • Work with the data engineering team to design new data pipelines and optimize the existing data pipelines.
            • Conquer complex problems by finding new ways to solve with simple, efficient approaches with a focus on reliability, scalability, quality, and cost of our platforms.
            • Build processes supporting data transformation, data structures metadata, and workload management.
            • Collaborate with the team to perform root cause analysis and audit internal and external data and processes to help answer specific business questions.

            What will you bring to us?

            • 5+ years of professional Dimensional Data Warehousing/Data Modeling and Big Data Experience
            • Strong skills to write complex, highly-optimized SQL queries across large volumes of data
            • Experience working directly with data analytics to bridge business requirements with data engineering.
            • Experience with AWS infrastructure
            • Proven experience working with Snowflake and Airflow in a production environment
            • Strong SQL skills with experience in query optimization and performance tuning
            • Proficiency in Python programming language for data processing and automation tasks.
            • Experience with DBT (Data Build Tool) for data transformation and modeling
            • Excellent troubleshooting and problem-solving skills
            • Ability to operate in an agile, entrepreneurial start-up environment, and prioritize
            • Excellent communication and teamwork, and a passion for learning
            • Curiosity and passion for data, visualization, and solving problems
            • Willingness to question the validity and accuracy of data and assumptions

            Preferred Qualifications:

            • Experience with Redshift, Snowflake, or other MPP databases is a plus.
            • Knowledge of ETL/ELT tools like Informatica, IBM DataStage, or SaaS ETL tools is a plus.
            • Experience with Tableau or other reporting tools is a plus.


            See more jobs at ITScout

            Apply for this job

            +30d

            Senior Data Engineer (PySpark) - (GS)

            ITScout, LATAM, AR, Remote
            Bachelor's degree, airflow, sql, Design, mobile, python, AWS

            ITScout is hiring a Remote Senior Data Engineer (PySpark) - (GS)

            ⚠️Open position only for people residing in Costa Rica, México, Argentina, and Brazil.⚠️


            Our client builds smart technology solutions through the combination of artificial intelligence, mobile, and web development for companies in the United States, Canada, and LATAM. It's a technology company headquartered in Costa Rica, with operations throughout LATAM. Their core focus is building intelligent tech solutions that help their customers be more efficient in optimizing internal digital processes.

            About the job Senior Data Engineer (PySpark)

            Senior Data Engineer

            We are seeking a skilled Data Engineer with expertise in Python, PySpark, and Apache Airflow. This role requires proficiency in AWS services such as Redshift and Databricks, as well as strong SQL skills for data manipulation and querying.

            Responsibilities:

            • Design, develop, and maintain scalable data pipelines and workflows using Python and PySpark (see the sketch after this list).
            • Implement and optimize ETL processes within Apache Airflow for data ingestion, transformation, and loading.
            • Utilize AWS services such as Redshift and Databricks for data storage, processing, and analysis.
            • Write efficient SQL queries and optimize database performance for large-scale datasets.
            • Implement version control and continuous integration using GitLab CI for maintaining codebase integrity.
            • Follow Test-Driven Development (TDD) practices to ensure code reliability and maintainability.
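
            As a rough illustration of pairing Airflow with PySpark as described above, here is a minimal DAG that submits a PySpark script; it assumes the apache-airflow-providers-apache-spark package and Airflow 2.4+, and the DAG id, script path, and connection id are hypothetical.

            ```python
            # Hypothetical DAG that hands a PySpark job to a Spark cluster.
            from datetime import datetime
            from airflow import DAG
            from airflow.providers.apache.spark.operators.spark_submit import (
                SparkSubmitOperator,
            )

            with DAG(
                dag_id="example_pyspark_etl",
                schedule="@daily",
                start_date=datetime(2024, 1, 1),
                catchup=False,
            ) as dag:
                transform = SparkSubmitOperator(
                    task_id="transform_events",
                    application="/opt/jobs/transform_events.py",  # PySpark script
                    conn_id="spark_default",
                )
            ```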


            Requirements:

            • Bachelor's degree in Computer Science, Engineering, or a related field.
            • 4 to 7 years of professional experience in data engineering or a related role.
            • Proficiency in Python and PySpark for building data pipelines and processing large datasets.
            • Hands-on experience with Apache Airflow for orchestrating complex workflows and scheduling tasks.
            • Strong knowledge of AWS services, including Redshift and Databricks, for data storage and processing.
            • Advanced SQL skills for data manipulation, querying, and optimization.
            • Experience with version control systems like GitLab CI for managing codebase changes.
            • Familiarity with Test-Driven Development (TDD) practices and writing unit tests for data pipelines.
            • Excellent problem-solving skills and attention to detail.
            • Strong communication and collaboration skills to work effectively within a team environment.
            • Certification in AWS or related technologies is preferred but not required.

            English: B2+ proficiency required.


            See more jobs at ITScout

            Apply for this job

            +30d

            Sr. Technical Program Manager, Data Platform

            agile, tableau, jira, terraform, airflow, sketch, sql, RabbitMQ, git, c++, AWS

            hims & hers is hiring a Remote Sr. Technical Program Manager, Data Platform

            Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

            Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

            About the job:

            We're looking for an energetic and experienced Senior Technical Program Manager to join our Data Platform Engineering team. Our team is responsible for enabling the H&H business (Product, Analytics, Operations, Finance, Data Science, Machine Learning, Customer Experience, Engineering) by providing a platform with a rich set of data and tools to leverage. 

            As a senior TPM within the Data Platform team, you’ll work closely with our Product and Engineering teams to understand their roadmaps, architecture, and data. You’ll also work with our consumers of the data platform to better understand their data needs. You’ll take your passion for working with people and leveraging your technical skills to move quickly, efficiently, and decisively to connect the dots and help our team deliver.

            You Will:

            • Build strong cross-functional relationships to understand our product data and the needs/uses of data
            • Create requirements and technical specifications
            • Track and manage project risks, dependencies, status, and deliverable timelines
            • Help drive Data Platform strategies
            • Work with stakeholders to understand requirements and negotiate solutions and timelines
            • Communicate and make sure the right people have the correct information in time
            • Work within our Data Platform Engineering team to help build out ticketed work and provide details to translate requirements and benefits to that work
            • Ensure the highest business value is being delivered to our stakeholders
            • Establish mechanisms to optimize team effectiveness
            • Lead prioritization meetings and status meetings with stakeholders at a regular cadence
            • Collaborate with other technical program managers to highlight dependencies with different domains
            • You will have a bias for action, a sense of urgency, and attention to detail that makes you someone your team can instinctively trust and rally behind
            • Drive continuous process improvements and best practices to create a robust, predictable, priority-driven culture
            • Collaborate with legal to ensure data privacy and compliance are followed and implemented
            • Organize and facilitate daily stand-up, sprint planning, sprint retrospectives, and backlog grooming sessions

            You Have:

            • Minimum of 8+ years of experience as a data-oriented Technical Program Manager or Technical Product Manager, or in a Lead capacity
            • Bachelor's degree in Computer Science, Engineering, or related field, or relevant years of work experience
            • Experience working with Data Platform Engineering teams to ship scalable data products and technical roadmaps
            • Previous experience building data platforms on the cloud using Databricks or Snowflake
            • Proficiency in Jira or other project management tools
            • Knowledge of modern data stacks like Airflow, Databricks, Google BigQuery, dbt, Fivetran
            • Ability to understand different data domains and technical requirements
            • Experience collaborating with different stakeholders such as Analytics, ML, Finance, Product, Marketing, Operations, and Customer Experience teams
            • Solid understanding of data pipelines
            • Knowledge of SQL to independently investigate and test datasets, perform data validation, sketch solutions, and create basic proofs of concept
            • Experience working with management to define and measure KPIs and other operating metrics
            • Extensive experience working on Data Security and Data Governance initiatives
            • A foundational understanding of Amazon Web Services (AWS) or Google Cloud Platform (GCP)
            • Understanding of SDLC and Agile frameworks
            • Strong project management skills with attention to detail and experience in managing multiple projects and meeting ongoing and overlapping deadlines 
            • Bias towards over-communication
            • Team player, collaborative, and positive attitude
            • Strong leadership and communication skills
            • Excellent writing, oral, and presentation skills
            • Ability to influence without authority
            • Passion for operational excellence, attention to detail, and a demonstrated ability to deliver results in a fast-paced, high-growth environment

            Nice To Have:

            • Experience working in healthcare 
            • Previous working experience at startups
            • A basic understanding of data streaming technologies (e.g., Kafka, RabbitMQ), Git, Atlan, and Terraform is a big plus 
            • Working experience with BI tools like Tableau and Looker

            Our Benefits (there are more but here are some highlights):

            • Competitive salary & equity compensation for full-time roles
            • Unlimited PTO, company holidays, and quarterly mental health days
            • Comprehensive health benefits including medical, dental & vision, and parental leave
            • Employee Stock Purchase Program (ESPP)
            • Employee discounts on hims & hers & Apostrophe online products
            • 401k benefits with employer matching contribution
            • Offsite team retreats

            #LI-Remote

            Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

            The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

            Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors. We don’t ever want the pay range to act as a deterrent from you applying!

            An estimate of the current salary range for US-based employees is
            $150,000 - $185,000 USD

            We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

            Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

            Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

            For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

            See more jobs at hims & hers

            Apply for this job

            +30d

            Security Governance Partner - Machine Learning/Data Science, Cash App

            Square, San Francisco, CA, Remote
            kotlin, tableau, airflow, sql, Design, java, kubernetes, python, AWS

            Square is hiring a Remote Security Governance Partner - Machine Learning/Data Science, Cash App

            Job Description

            Cash Security is a security-enabling engineering organization focused on scaling security as a discipline through innovation. As a security team, Security Governance develops, implements and promotes frameworks and standards aimed at securing our customer’s data, giving privacy and security considerations a voice across the organization, and simplifying Cash’s security-related regulatory and compliance obligations.

            Governance Partners act as a bridge between the Cash App functional area they support and Leadership across security, engineering, and compliance teams to drive security enablement. The Security Governance Partner for Cash App Machine Learning and Data Science (ML/DS) works directly with the teams that build Cash App’s machine learning pipelines and infrastructure and the Cash data scientists to identify and communicate constraints and to evaluate potential solutions, while partnering closely with MLDS Security Engineering to communicate requirements and shape the security posture of the MLDS organization.

            You will:

            • Act as a security domain expert in partnership with Compliance, Legal, and Engineering
            • Collaborate deeply across roles and functions, with Security Engineering, Machine Learning Engineering and Modeling, and Data Science/Business Intelligence
            • Identify, prioritize and balance security efforts with other objectives
            • Help Cash automate governance and compliance functions and develop reusable tools for common tasks using scripting languages like Python or data tools like Prefect (see the sketch after this list)
            • Participate in technical design discussions, evaluate security properties of systems and services, drive risk decisions, and influence technical architecture
            • Understand challenges and roadblocks to achieving the desired security posture, and push requirements to Security Engineering to drive long-term change
            • Develop, implement and promote security standards and frameworks
            • Interpret and communicate security and compliance constraints to key stakeholders
            • Monitor applicable changes to security and privacy related laws, regulations, and industry standards, with an eye towards Generative AI and other forward-looking technologies
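
            As a rough illustration of the reusable automation mentioned in the list above, here is a minimal Prefect 2.x flow sketch; the grant data and the check are hypothetical stand-ins, not Cash App tooling.

            ```python
            # Hypothetical scheduled governance task: flag expired access grants.
            from datetime import date
            from prefect import flow, task

            @task
            def fetch_access_grants() -> list[dict]:
                # Stand-in for reading grants from an access-management system.
                return [{"user": "alice", "expires": date(2023, 1, 1)}]

            @task
            def find_expired(grants: list[dict]) -> list[dict]:
                return [g for g in grants if g["expires"] < date.today()]

            @flow
            def access_review():
                expired = find_expired(fetch_access_grants())
                print(f"{len(expired)} expired grants need revocation")

            if __name__ == "__main__":
                access_review()
            ```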

            Qualifications

            You have:

            • 3+ years of experience leading projects or programs in a security environment
            • 5+ years working in a security-focused role in a technology-heavy industry
            • Proficiency with at least one programming language (e.g. Python, Kotlin, Java)
            • Conceptual understanding of machine learning and data science tools and processes
            • Solid technical background with cloud computing architectures and security patterns
            • Ability to drive alignment and change in a matrixed-environment with minimal supervision
            • Boundless curiosity, persistence and a grounded approach to getting things done
            • Process orientation and an efficiency mindset to keep the organization unblocked and accountable to security
            • Working knowledge of one or more relevant compliance regulations such as SEC/FINRA, CCPA, CPRA, GDPR, PCI DSS, SOX

            Tech stack we use and teach:

            • Java and Kotlin 
            • Python
            • AWS, GCP, and Kubernetes
            • SQL, Snowflake
            • Apache Spark
            • DynamoDB, Kafka, Apache Beam and Google DataFlow
            • Tableau, Airflow, Looker, Mode, Prefect
            • Tecton
            • Jupyter notebooks 

            See more jobs at Square

            Apply for this job

            +30d

            Principal Data Engineer

            Gemini, Remote (USA)
            remote-first, nosql, airflow, sql, Design, css, python, javascript

            Gemini is hiring a Remote Principal Data Engineer

            About the Company

            Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

            Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

            At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

            In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

            The Department: Analytics

            The Role: Principal Data Engineer

            As a member of our data engineering team, you'll set standards for data engineering solutions that have organizational impact. You'll provide architectural solutions that are efficient, robust, extensible, and competitive within the business and industry context. You'll collaborate with senior data engineers and analysts, guiding them toward their career goals at Gemini. Communicating your insights to leaders across the organization is paramount to success.

            Responsibilities:

            • Focused on technical leadership, defining patterns and operational guidelines for their vertical(s)
            • Independently scopes, designs, and delivers solutions for large, complex challenges
            • Provides oversight, coaching and guidance through code and design reviews
            • Designs for scale and reliability with the future in mind. Can do critical R&D
            • Successfully plans and delivers complex, multi-team or system, long-term projects, including ones with external dependencies
            • Identifies problems that need to be solved and advocates for their prioritization
            • Owns one or more large, mission-critical systems at Gemini or multiple complex, team-level projects, overseeing all aspects from design through implementation to operation
            • Collaborates with coworkers across the org to document and design how systems work and interact
            • Leads and coordinates large initiatives across domains, even outside their core expertise
            • Designs, architects and implements best-in-class Data Warehousing and reporting solutions
            • Builds real-time data and reporting solutions
            • Develops new systems and tools to enable the teams to consume and understand data more intuitively

            Minimum Qualifications:

            • 10+ years experience in data engineering with data warehouse technologies
            • 10+ years experience in custom ETL design, implementation and maintenance
            • 10+ years experience with schema design and dimensional data modeling
            • Experience building real-time data solutions and processes
            • Advanced skills with Python and SQL are a must
            • Experience and expertise in Databricks, Spark, Hadoop, etc.
            • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
            • Experience with one or more ETL tools (Informatica, Pentaho, SSIS, Alooma, etc.)
            • Strong computer science fundamentals including data structures and algorithms
            • Strong software engineering skills in any server-side language, preferably Python
            • Experienced in working collaboratively across different teams and departments
            • Strong technical and business communication skills

            Preferred Qualifications:

            • Kafka, HDFS, Hive, Cloud computing, machine learning, LLMs, NLP & Web development experience is a plus
            • NoSQL experience a plus
            • Deep knowledge of Apache Airflow
            • Expert experience implementing complex, enterprise-wide data transformation and processing solutions
            • Experience with Continuous integration and deployment
            • Knowledge and experience of financial markets, banking or exchanges
            • Web development skills with HTML, CSS, or JavaScript

            It Pays to Work Here

            The compensation & benefits package for this role includes:
            • Competitive starting salary
            • A discretionary annual bonus
            • Long-term incentive in the form of a new hire equity grant
            • Comprehensive health plans
            • 401K with company matching
            • Paid Parental Leave
            • Flexible time off

            Salary Range: The base salary range for this role is between $172,000 and $215,000 in the State of New York, the State of California, and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors, including skill set, experience, job scope, and current market data.

            At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

            #LI-AH1

            Apply for this job

            +30d

            Java Solution Architect (Inside IR35 Contract)

            Version1, London, United Kingdom, Remote
            airflow, oracle, Design, api, java, python, AWS

            Version1 is hiring a Remote Java Solution Architect (Inside IR35 Contract)

            Job Description

            Java Solution Architect

            MUST BE BASED WITHIN 50 MILES OF EDINBURGH, LONDON, BIRMINGHAM, MANCHESTER, NEWCASTLE, DUBLIN, OR BELFAST

            REMOTE BASED WITH VERY OCCASIONAL TRAVEL TO CLIENT SITES AND OFFICE.

            Would you like the opportunity to expand your skill set across Java, Python, Spark, Hadoop, Trino, and Airflow across the Banking & Financial Services industries?

            How about if you worked with an Innovation Partner of the Year Winner (2023 Oracle EMEA Partner Awards), Global Microsoft Modernising Applications Partner of the Year (2023) and AWS Collaboration Partner of the Year (2023) who would give you the opportunity to undertake accreditations and educational assistance for courses relevant to your role?

            Here at Version 1, we are currently in the market for an experienced Java Solution Architect to join our growing Digital, Data & Cloud Practice.

            You will have the opportunity to work with the latest technology on projects across a multiplicity of sectors and industries.

            You will be:

            • Leading Java and Python development projects.
            • Designing and developing API integrations using Spark.
            • Collaborating with clients and internal teams to understand business requirements and translate them into HLD and LLD solutions.
            • Defining the architecture and technical design.
            • Designing data flows and integrations using Hadoop.
            • Working with the product team and testers throughout testing.
            • Creating comprehensive documentation, including solution architecture, design, and user guides.
            • Providing training and support to end users and client teams.
            • Staying up to date with the latest trends and best practices, and sharing knowledge with the team.

            Qualifications

            You will have expertise within the following:

            • Java, Python, Spark, Hadoop (Essential)
            • Trino, Airflow (Desirable)
            • Architecture and capabilities.
            • Designing and implementing complex solutions with a focus on scalability and security.
            • Excellent communication and collaboration skills.

            Apply for this job

            +30d

            Senior Fullstack Engineer

            carsales, Sydney, Australia, Remote
            terraform, airflow, vue, typescript, angular, backend, frontend, Node.js

            carsales is hiring a Remote Senior Fullstack Engineer

            Job Description

            What you will do

            We're hiring a Senior Full-stack Engineer (frontend focus) who will join a team of talented developers, continuing to work with our wider Tech and Product teams and clients. 

            This is an exciting role, which will include:

            • You'll work in a cross-functional full-stack team that prioritises software craftsmanship.
            • You'll have opportunities to work on all aspects of the product, including frontend (Vue.js), backend (Nest.js), CI/CD (CircleCI), Cloud (Terraform + GCP), and Data Engineering (Airflow, BigQuery, Apache Beam).
            • Your work will have a real impact on the business and our clients, including Weatherzone, the NRL, and OzBargain.

            Qualifications

            What you bring to the role

            • A good understanding of frontend engineering
            • An interest in User Experience, and a willingness to understand what the end user is trying to achieve
            • Proficiency with TypeScript
            • Enough knowledge of Node.js REST APIs to read and understand the code and make basic changes
            • Experience with modern frontend frameworks (Vue, React, Angular, etc.) is essential
            • Professional experience with Vue will be a significant advantage

            See more jobs at carsales

            Apply for this job

            +30d

            Data Engineering Intern - Graduate

            Tubi, San Francisco, CA; Remote
            terraform, scala, airflow, sql, java, c++, python

            Tubi is hiring a Remote Data Engineering Intern - Graduate

            Join Tubi (www.tubi.tv), Fox Corporation's premium ad-supported video-on-demand (AVOD) streaming service leading the charge in making entertainment accessible to all. With over 200,000 movies and television shows, including a growing library of Tubi Originals, 200+ local and live news and sports channels, and 455 entertainment partners featuring content from every major Hollywood studio, Tubi gives entertainment fans an easy way to discover new content that is available completely free. Tubi's library has something for every member of our diverse audience, and we're committed to building a workforce that reflects that diversity. We're looking for great people who are creative thinkers, self-motivators, and impact-makers looking to help shape the future of streaming.

            About the Role:

            At Tubi, data plays a vital role in keeping viewers engaged and the business thriving. Every day, data engineering pipelines analyze the massive amount of data generated by millions of viewers, turning it into actionable insights. In addition to processing TBs a day of 1st-party user activity data, we manage a petabyte-scale data lake and data warehouses that several hundred consumers use daily. We have two openings on two different teams.

            Core Data Engineering (1): In this role, you will join a team focused on Core Data Engineering, helping build and analyze business-critical datasets that fuel Tubi's success as a leading streaming platform.

            • Use SQL and SQL modeling to interact with and create massive sets of data
            • Use DBT and its semantic modeling concept to build production data models
            • Use Databricks as a data warehouse and computing platform
            • Use Python/Scala in notebooks to interact with and create large datasets

            Streaming Analytics (1): In this role, you will join a small and nimble team focused on Streaming Analytics that powers our core and critical datasets for machine learning, helping improve the data quality that fuels Tubi's success as a leading streaming platform.

            • Use SQL to explore and analyze the data quality of our most critical datasets, working with different technical stakeholders across ML & data science 
            • Work with engineers to implement a near-real-time data quality dashboard (see the sketch after this list)
            • Use Python/Scala in notebooks to transform and explore large datasets
            • Use tools like Airflow for workflow management and Terraform for cloud infrastructure automation
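
            For illustration, a minimal PySpark sketch of the data-quality exploration listed above: profile per-column null rates for a critical dataset. The table and column names are hypothetical, not Tubi's.

            ```python
            # Hypothetical profiling step feeding a data-quality dashboard.
            from pyspark.sql import SparkSession
            from pyspark.sql import functions as F

            spark = SparkSession.builder.appName("dq-profile").getOrCreate()

            events = spark.table("analytics.playback_events")  # hypothetical table
            total = events.count() or 1                        # guard empty table

            null_rates = events.select([
                (F.sum(F.col(c).isNull().cast("int")) / total).alias(c)
                for c in events.columns
            ])
            null_rates.show()  # one row: null rate per column
            ```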

            Qualifications: 

            • Fluency (intermediate) in one major programming language (preferably Python, Scala, or Java) and SQL (any variant)
            • Familiarity with big data technologies (e.g., Apache Spark, Kafka) is a plus
            • Strong communication skills and a desire to learn!

            Program Eligibility Requirements:

            • Must be actively enrolled in an accredited college or university and pursuing an undergraduate or graduate degree during the length of the program
            • Current class standing of sophomore (second-year college student) or above
            • Strong academic record (minimum cumulative 3.0 GPA)
            • Committed and available to work for the entire length of the program

            About the Program:

            • Application Deadline: April 19, 2024
            • Program Timeline: 10-week placement beginning on 6/17
            • Weekly Hours: Up to 40 hours per week (5 days)
            • Worksite: Remote or Hybrid (SF or LA)

            Pursuant to state and local pay disclosure requirements, the pay range for this role, with the final offer amount dependent on education, skills, experience, and location, is listed per hour below.

            California, Colorado, New York City, Westchester County, NY, and Washington
            $40 USD

            Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here, covers the majority of all US employee benefits.  The following distinctions below outline the differences between the Tubi and FOX benefits:

            • For US-based non-exempt Tubi employees, the FOX Employee Benefits summary accurately captures the Vacation and Sick Time.
            • For all salaried/exempt employees, in lieu of the FOX Vacation policy, Tubi offers a Flexible Time off Policy to manage all personal matters.
            • For all full-time, regular employees, in lieu of FOX Paid Parental Leave, Tubi offers a generous Parental Leave Program, which allows parents twelve (12) weeks of paid bonding leave within the first year of the birth, adoption, surrogacy, or foster placement of a child. This time is 100% paid through a combination of any applicable state, city, and federal leaves and wage-replacement programs in addition to contributions made by Tubi.
            • For all full-time, regular employees, Tubi offers a monthly wellness reimbursement.

            Tubi is proud to be an equal opportunity employer and considers qualified applicants without regard to race, color, religion, sex, national origin, ancestry, age, genetic information, sexual orientation, gender identity, marital or family status, veteran status, medical condition, or disability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. We are an E-Verify company.

            See more jobs at Tubi

            Apply for this job

            +30d

            Principal Data Engineer

            Procore Technologies, Bangalore, India, Remote
            scala, nosql, airflow, Design, azure, UX, java, docker, postgresql, kubernetes, jenkins, python, AWS

            Procore Technologies is hiring a Remote Principal Data Engineer

            Job Description

            We’re looking for a Principal Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon including Procore developers, analysts, partners, and customers. 

            As a Principal Data Engineer, you’ll use your expert-level technical skills to craft innovative solutions while influencing and mentoring other senior technical leaders. To be successful in this role, you’re passionate about distributed systems, including caching, streaming, and indexing technologies on the cloud, with a strong bias for action and outcomes. If you’re an inspirational leader comfortable translating vague problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

            This position reports to the Senior Manager, Reporting and Analytics. It can be based in our Bangalore or Pune office, or you can work remotely from a location in India. We’re looking for someone to join us immediately.

            What you’ll do: 

            • Design and build the next-generation data platform for the construction industry
            • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
            • Contribute to setting standards and development principles across multiple teams and the larger organization
            • Stay connected with other architectural initiatives and craft a data platform architecture that supports and drives our overall platform
            • Provide technical leadership to efforts around building a robust and scalable data pipeline to support billions of events
            • Help identify and propose solutions for technical and organizational gaps in our data pipeline by running proof of concepts and experiments working with Data Platform Engineers on implementation
            • Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in the data space to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools

            What we’re looking for: 

            • Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience is required; MS or Ph.D. degree in Computer Science or a related field is preferred
            • 10+ years of experience building and operating cloud-based, highly available, and scalable online serving or streaming systems utilizing large, diverse data sets in production
            • Expertise with diverse data technologies like Databricks, PostgreSQL, GraphDB, NoSQL DB, Mongo, Cassandra, Elasticsearch, Snowflake, etc.
            • Strength in the majority of commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Airflow, Kubernetes, Docker, Argo, Jenkins, or similar
            • Expertise with all aspects of data systems, including ETL, aggregation strategy, performance optimization, and technology trade-off
            • Understanding of data access patterns, streaming technology, data validation, data modeling, data performance, cost optimization
            • Experience defining data engineering/architecture best practices at a department and organizational level and establishing standards for operational excellence and code and data quality at a multi-project level
            • Strong passion for learning, always open to new technologies and ideas
            • AWS and Azure experience is preferred

            See more jobs at Procore Technologies

            Apply for this job

            +30d

            Staff Data Engineer

            Procore TechnologiesBangalore, India, Remote
            scalaairflowsqlDesignUXjavakubernetespython

            Procore Technologies is hiring a Remote Staff Data Engineer

            Job Description

            We’re looking for a Staff Data Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next-generation construction data platform for others to build upon, including Procore developers, analysts, partners, and customers.

            As a Staff Data Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the movement, transformation, and retrieval of data for use in analytics, machine learning, and service integration. To be successful in this role, you’re passionate about distributed systems including storage, streaming, and batch data processing technologies on the cloud, with a strong bias for action and outcomes. If you’re a seasoned data engineer comfortable and excited about building our next-generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!

            This is a full-time position reporting to our Senior Manager of Software Engineering. It is based in the India office, though employees may choose to work remotely. We are looking for someone to join our team immediately.

            What you’ll do: 

            • Participate in the design and implementation of our next-generation data platform for the construction industry
            • Define and implement operational and dimensional data models and transformation pipelines to support reporting and analytics (a minimal modeling sketch follows this list)
            • Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
            • Understand our current data models and infrastructure, proactively identify areas for improvement, and prescribe architectural recommendations with a focus on performance and accessibility. 
            • Work alongside our Product, UX, and IT teams, leveraging your expertise in the data space to influence our product roadmap, developing innovative solutions that add additional value to our platform
            • Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
            • Stay up to date with the latest data technology trends
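
            To make the dimensional-modeling bullet above concrete, here is a minimal sketch, assuming pandas and an invented project-cost extract; it splits one raw operational table into a dimension table, a fact table, and a reporting aggregate:

            ```python
            import pandas as pd

            # Hypothetical raw operational extract: one row per cost event.
            raw = pd.DataFrame({
                "project_id": [1, 1, 2],
                "project_name": ["Tower A", "Tower A", "Bridge B"],
                "event_date": pd.to_datetime(["2024-01-05", "2024-01-06", "2024-01-05"]),
                "cost_usd": [1200.0, 800.0, 4500.0],
            })

            # Dimension: one row per project.
            dim_project = raw[["project_id", "project_name"]].drop_duplicates()

            # Fact: measures keyed by the dimension's natural key.
            fact_cost = raw[["project_id", "event_date", "cost_usd"]]

            # Reporting aggregate: daily cost per project.
            daily_cost = (fact_cost
                          .groupby(["project_id", "event_date"], as_index=False)["cost_usd"]
                          .sum())
            ```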

            What we’re looking for: 

            • Bachelor’s degree in Computer Science or a related field, or comparable work experience, is preferred
            • 8+ years of experience building and operating cloud-based, highly available, and scalable data platforms and pipelines supporting vast amounts of data for reporting and analytics
            • 2+ years of experience building data warehouses in Snowflake or Redshift
            • Hands-on experience with MPP query engines like Snowflake, Presto, Dremio, and Spark SQL
            • Expertise in relational and dimensional data modeling
            • Understanding of data access patterns, streaming technology, data validation, performance optimization, and cost optimization
            • Strength in commonly used data technologies and languages such as Python, Java or Scala, Kafka, Spark, Flink, Airflow, Kubernetes, or similar
            • Strong passion for learning, always open to new technologies and ideas

            See more jobs at Procore Technologies

            Apply for this job

            +30d

            Software Engineer - Infrastructure Platforms

            CloudflareAustin or Remote US
            airflowpostgressqlDesignansibledockerpostgresqlmysqlkuberneteslinuxpython

            Cloudflare is hiring a Remote Software Engineer - Infrastructure Platforms

            About Us

            At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

            We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

            Available Locations: Remote - US, Remote - Mexico, Remote - Canada, Mexico City - Mexico, Ontario - Canada.

            About the Role

            An engineering role at Cloudflare provides an opportunity to address some big challenges, at scale.  We believe that with our talented team, we can solve some of the biggest security, reliability and performance problems facing the Internet. Just how big?  

            • We have in excess of 15 Terabits of network transit capacity
            • We operate 250 Points-of-presence around the world
            • We serve more traffic than Twitter, Amazon, Apple, Instagram, Bing, & Wikipedia combined
            • Anytime we push code, it immediately affects over 200 million internet users
            • Every day, up to 20,000 new customers sign-up for Cloudflare service
            • Every week, the average Internet user touches us more than 500 times

            We are looking for talented Software Engineers to build and develop the platform that earns the trust Cloudflare customers place in us. Our Software Engineers come from a variety of technical backgrounds and have built up their knowledge working in different environments. But the common factors across all of our reliability-focused engineers include a passion for automation, scalability, and operational excellence. Our Infrastructure Engineering team focuses on the automation needed to scale our infrastructure.

            Our team is well-funded and focused on building an extraordinary company.  This is a superb opportunity to join a high-performing team and scale our high-growth network as Cloudflare’s business grows.  You will build tools to constantly improve our scale and speed of deployment.  You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale.   

            Infrastructure Platforms Software Engineers inside our Resiliency organization focus on building and maintaining the reliable and scalable underlying platforms that act as sources of truth and foundations for automation of Cloudflare’s hardware, network, and datacenter infrastructure. We interface with SRE, Network Engineering, Datacenter Engineering and other Infrastructure and Reliability teams to ensure their ongoing needs are met by the platforms we provide.

            Many of our Software Engineers have had the opportunity to work at multiple offices on interim and long-term project assignments. The ideal Software Engineering candidate has a passionate curiosity about how the Internet fundamentally works and a strong knowledge of Linux and hardware. We require strong coding ability in Rust and Python. We prefer to hire experienced candidates; however, raw skill trumps experience, and we welcome strong junior applicants.

             

            Required Skills

            • Intermediate level software development skills in Rust and Python
            • Linux systems administration experience
            • 5 years of relevant software development experience
            • Strong skills in network services and Rest APIs
            • SQL databases (Postgres or MySQL)
            • Self-starter; able to work independently based on high-level requirements

             

            Examples of desirable skills, knowledge and experience

            • 5 years of relevant work experience
            • Prior experience working with Diesel and common database patterns in Rust
            • Configuration management systems such as Saltstack, Chef, Puppet or Ansible
            • Prior experience working with datacenter infrastructure automation at scale
            • Load balancing and reverse proxies such as Nginx, Varnish, HAProxy, Apache
            • The ability to understand service metrics and visualize them using Grafana and Prometheus (see the instrumentation sketch after this list)
            • Key/Value stores (Redis, KeyDB, CouchBase, KyotoTycoon, Cassandra, LevelDB)
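
            As a small illustration of the Grafana/Prometheus item above, the sketch below uses the Python prometheus_client library to expose a request counter and a latency histogram for Prometheus to scrape; the metric names, port, and workload are invented:

            ```python
            import random
            import time

            from prometheus_client import Counter, Histogram, start_http_server

            REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])
            LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

            def handle_request():
                with LATENCY.time():                       # observe elapsed time
                    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
                REQUESTS.labels(status="ok").inc()

            if __name__ == "__main__":
                start_http_server(8000)  # metrics served at :8000/metrics
                while True:
                    handle_request()
            ```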

            Bonus Points

            • Experience with programming languages other than those listed in requirements.
            • Network fundamentals: DHCP, subnetting, routing, firewalls, IPv6
            • Experience with continuous integration and deployment pipelines
            • Performance analysis and debugging with tools like perf, sar, strace, gdb, dtrace
            • Experience developing systems that are highly available and redundant across regions
            • Experience with the Linux kernel and Linux software packaging
            • Internetworking and BGP

            Some tools that we use

            • Rust
            • Python
            • Diesel
            • Actix
            • Tokio
            • Apache Airflow 
            • Salt
            • Netbox
            • Docker
            • Kubernetes
            • Nginx
            • PostgreSQL
            • Redis
            • Prometheus

            Compensation

            Compensation may be adjusted depending on work location.

            • For Colorado-based hires: Estimated annual salary of $137,000 - $152,000
            • For New York City, Washington, and California (excluding Bay Area) based hires: Estimated annual salary of $154,000 - $171,000.
            • For Bay Area-based hires: Estimated annual salary of $162,000 - $180,000

            Equity

            This role is eligible to participate in Cloudflare’s equity plan.

            Benefits

            Cloudflare offers a complete package of benefits and programs to support you and your family.  Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!  The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.

            Health & Welfare Benefits

            • Medical/Rx Insurance
            • Dental Insurance
            • Vision Insurance
            • Flexible Spending Accounts
            • Commuter Spending Accounts
            • Fertility & Family Forming Benefits
            • On-demand mental health support and Employee Assistance Program
            • Global Travel Medical Insurance

            Financial Benefits

            • Short and Long Term Disability Insurance
            • Life & Accident Insurance
            • 401(k) Retirement Savings Plan
            • Employee Stock Participation Plan

            Time Off

            • Flexible paid time off covering vacation and sick leave
            • Leave programs, including parental, pregnancy health, medical, and bereavement leave

             

            What Makes Cloudflare Special?

            We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

            Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.

            Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

            Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one.

            1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

            Sound like something you’d like to be a part of? We’d love to hear from you!

            This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

            Cloudflare is proud to be an equal opportunity employer.  We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness.  All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

            Cloudflare provides reasonable accommodations to qualified individuals with disabilities.  Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.  If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

            See more jobs at Cloudflare

            Apply for this job

            +30d

            Senior Data Scientist - Support

            SquareSan Francisco, CA, Remote
            Bachelor degreetableauairflowsqlDesignpython

            Square is hiring a Remote Senior Data Scientist - Support

            Job Description

            The Cash App Support organization is growing and we are looking for a Data Scientist (DS) to join the team. The DS team at Cash derives valuable insights from our unique datasets and turns those insights into actions that improve the experience for our customers every day. In this role, you’ll be embedded in our Support org and work closely with operations and other cross-functional partners to drive meaningful change in how our customers interact with the Support team and resolve issues with their accounts.

            You will:

            • Partner directly with a Cash App customer support team, working closely with operations, engineers, and machine learning teams
            • Analyze large datasets using SQL and scripting languages to surface actionable insights and opportunities to the operations team and other key stakeholders
            • Approach problems from first principles, using a variety of statistical and mathematical modeling techniques to research and understand advocate and customer behavior
            • Design and analyze A/B experiments to evaluate the impact of changes we make to our operational processes and tools (see the analysis sketch after this list)
            • Work with engineers to log new, useful data sources as we evolve processes, tooling, and features
            • Build, forecast, and report on metrics that drive strategy and facilitate decision making for key business initiatives
            • Write code to effectively process, cleanse, and combine data sources in unique and useful ways, often resulting in curated ETL datasets that are easily used by the broader team
            • Build and share data visualizations and self-serve dashboards for your partners
            • Effectively communicate your work with team leads and cross-functional stakeholders on a regular basis
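
            For the A/B-experiment bullet above, a minimal analysis sketch (the numbers are invented, and a real experiment would also involve power analysis and a pre-registered metric) might look like this in Python:

            ```python
            import numpy as np
            from scipy import stats

            # Hypothetical per-user resolution times (hours), control vs. treatment.
            control = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5])
            treatment = np.array([3.9, 4.4, 3.1, 5.2, 4.0, 4.6])

            # Welch's t-test (does not assume equal variances).
            t_stat, p_value = stats.ttest_ind(control, treatment, equal_var=False)
            lift = treatment.mean() / control.mean() - 1

            print(f"t={t_stat:.2f}, p={p_value:.3f}, lift={lift:+.1%}")
            ```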

            Qualifications

            You have:

            • An appreciation for the connection between your work and the experience it delivers to customers. Previous exposure to or interest in customer support problems would be great to have
            • A bachelor’s degree in statistics, data science, or a similar STEM field with 5+ years of experience in a relevant role OR
            • A graduate degree in statistics, data science, or a similar STEM field with 2+ years of experience in a relevant role
            • Advanced proficiency with SQL and data visualization tools (e.g. Looker, Tableau, etc)
            • Experience with scripting and data analysis programming languages, such as Python or R
            • Experience with cohort and funnel analyses, and a deep understanding of statistical concepts such as selection bias, probability distributions, and conditional probabilities
            • Experience in a high-growth tech environment

            Technologies we use and teach:

            • SQL, Snowflake, etc.
            • Python (Pandas, Numpy)
            • Looker, Mode, Tableau, Prefect, Airflow

            See more jobs at Square

            Apply for this job

            +30d

            Staff Data Scientist - Sales & Account Management

            SquareSan Francisco, CA, Remote
            Bachelor degreetableauairflowsqlDesignpython

            Square is hiring a Remote Staff Data Scientist - Sales & Account Management

            Job Description

            The Cash App Data Science (DS) organization is growing and we are looking for a Data Scientist to join the team, embedded within our Sales and Account Management domain. You will be responsible for deriving valuable insights from our unique datasets, as well as developing models, forecasts, analyses, and reports to help achieve merchant acquisition, retention, growth, and profitability goals.

            You will:

            • Partner directly with the Cash App Sales & AM team, working closely with operations, strategy, engineers, account executives/managers and leads
            • Analyze large datasets using SQL and scripting languages to surface actionable insights and opportunities to key stakeholders
            • Approach problems from first principles, using a variety of statistical and mathematical modeling techniques to research and understand merchant behavior
            • Design and analyze A/B experiments to evaluate the impact of changes we make to our operational processes and tools
            • Work with engineers to log new, useful data sources as we evolve processes, tooling, and features
            • Build, forecast, and report on metrics that drive strategy and facilitate decision making for key business initiatives
            • Write code to effectively process, cleanse, and combine data sources in unique and useful ways, often resulting in curated ETL datasets that are easily used by the broader team
            • Build and share data visualizations and self-serve dashboards for your partners
            • Effectively communicate your work with team leads and cross-functional stakeholders on a regular basis

            Qualifications

            You have:

            • An appreciation for the connection between your work and the experience it delivers to customers. Previous exposure to or interest in marketplace platforms, especially on the merchant side, would be great to have.
            • A bachelor’s degree in statistics, data science, or a similar STEM field with 8+ years of experience in a relevant role OR
            • A graduate degree in statistics, data science, or a similar STEM field with 6+ years of experience in a relevant role
            • Advanced proficiency with SQL and data visualization tools (e.g. Looker, Tableau, etc)
            • Experience with scripting and data analysis programming languages, such as Python or R
            • Experience with cohort and funnel analyses, and a deep understanding of statistical concepts such as selection bias, probability distributions, and conditional probabilities

            Technologies we use and teach:

            • SQL, Snowflake, etc.
            • Python (Pandas, Numpy)
            • Looker, Mode, Tableau, Prefect, Airflow

            See more jobs at Square

            Apply for this job

            +30d

            Java Solution Architect

            Version1Málaga, Spain, Remote
            airfloworacleDesignapijavapythonAWS

            Version1 is hiring a Remote Java Solution Architect

            Job Description

            Java Solution Architect

            MUST BE BASED WITHIN 50 MILES OF EDINBURGH, LONDON, BIRMINGHAM, MANCHESTER, NEWCASTLE, DUBLIN, OR BELFAST

            REMOTE BASED WITH VERY OCCASIONAL TRAVEL TO CLIENT SITES AND OFFICE.

            Would you like the opportunity to expand your skillset across Java, Python, Spark, Hadoop, Trino & Airflow in the Banking & Financial Services industries?

            How about if you worked with an Innovation Partner of the Year Winner (2023 Oracle EMEA Partner Awards), Global Microsoft Modernising Applications Partner of the Year (2023) and AWS Collaboration Partner of the Year (2023) who would give you the opportunity to undertake accreditations and educational assistance for courses relevant to your role?

            Here at Version 1, we are currently in the market for an experienced Java Solution Architect to join our growing Digital, Data & Cloud Practice.

            You will have the opportunity to work with the latest technology and work on projects across a multiplicity of sectors and industries.

            You will be:

            • Leading the development of Java and Python development projects.
            • Designing and developing API integrations using Spark (see the PySpark sketch after this list).
            • Collaborating with clients and internal teams to understand business requirements and translate them into high-level design (HLD) and low-level design (LLD) solutions.
            • Defining the architecture and technical design.
            • Designing data flows and integrations using Hadoop.
            • Working with the product team and testers throughout implementation and testing.
            • Creating and developing comprehensive documentation, including solution architecture, design, and user guides.
            • Providing training and support to end-users and client teams.
            • Staying up to date with the latest trends and best practices, and sharing knowledge with the team.
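
            As a rough sketch of the Spark work described in the list above (the paths, column names, and aggregation are invented for illustration), a PySpark job might read landed API data and publish a curated aggregate:

            ```python
            from pyspark.sql import SparkSession
            from pyspark.sql import functions as F

            spark = SparkSession.builder.appName("api-ingest").getOrCreate()

            # Hypothetical: JSON records already landed from an upstream API.
            df = spark.read.json("s3://bucket/landing/transactions/")

            daily = (df
                     .withColumn("trade_date", F.to_date("timestamp"))
                     .groupBy("trade_date", "account_id")
                     .agg(F.sum("amount").alias("total_amount")))

            daily.write.mode("overwrite").partitionBy("trade_date").parquet(
                "s3://bucket/curated/daily_totals/")
            ```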

            Qualifications

            You will have expertise within the following:

            • Java, Python, Spark, Hadoop (Essential)
            • Trino, Airflow (Desirable)
            • Architecture and capabilities.
            • Designing and implementing complex solutions with a focus on scalability and security.
            • Excellent communication and collaboration skills.

            Apply for this job

            +30d

            (Senior) Python Engineer, Data Group

            WoltStockholm, Sweden, Remote
            airflowkubernetespython

            Wolt is hiring a Remote (Senior) Python Engineer, Data Group

            Job Description

            Data at Wolt

            As the scale of Wolt has rapidly grown, we are introducing new users to our data platform every day and want this to become a coherent and streamlined experience for all users, whether they’re Analysts or Data Scientists working with our data, or teams bringing new data to the platform from their applications. We aim both to provide new platform capabilities across batch, streaming, orchestration and data integration to serve our users’ needs, and to build an intuitive interface for them to solve their use cases without having to learn the details of the underlying tools.

            In the context of this role we are hiring an experienced Senior Software Engineer to provide technical leadership and individual contribution in one of the following workstreams:

            Data Governance

            Wolt’s Data Group has already developed initial foundational tooling in the areas of data management, security, auditing, data catalog and quality monitoring, but through your technical contributions you will ensure our Data Governance tooling is state of the art. You’ll be improving the current Data Governance platform, making sure it can be further integrated with the rest of the Data Platform and Wolt Services in a scalable, secure and compliant way, without significant disruptions to the teams.

            Data Experience

            We want to ensure our Analysts, Data Scientists, and Engineers can discover, understand, and publish high-quality data at scale. We have recently released a new data platform tool which enables simple, yet powerful creation of workflows via a declarative interface. You will help us ensure our users succeed in their work with effective and polished user experiences by developing our internal user-facing tooling and curating our documentation to the highest standards. And what's best, you get to work closely with excited users to get continuous feedback about released features while supporting and onboarding them to new workflows.

            Data Lakehouse

            We recently started this workstream to manage data integration, organization, and maintenance of our new Iceberg based data lakehouse architecture. Together, we build and maintain ingestion pipelines to efficiently gather data from diverse sources, ensuring seamless data flow. We create and manage workflows to transform raw data into structured formats, guaranteeing data quality and accessibility for analytics and machine learning purposes.

            At the time you join, we’ll match you with one of these workstreams based on our needs and your skills, experience and preferences.

            How we work

            Our teams have a lot of autonomy and ownership in how they work and solve their challenges. We value collaboration, learning from each other and helping each other out to achieve the team’s goals. We create an environment of trust, in which everyone’s ideas are heard and where we challenge each other to find the best solutions. We have empathy towards our users and other teams. Even though we’re working in a mostly remote environment these days, we stay connected and don’t forget to have fun together building great software!

            Our tech stack

            Our primary programming language of choice is Python. We deploy our systems in Kubernetes and AWS. We use Datadog for observability (logging and metrics). We have built our data warehouse on top of Snowflake and orchestrate our batch processes with Airflow and Dagster. We are heavy users of Kafka and Kafka Connect. Our CI/CD pipelines rely on GitHub actions and Argo Workflows.
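
            Since Airflow is named in the stack, here is a minimal DAG sketch using Airflow’s TaskFlow API (assuming Airflow 2.4+; the DAG name and tasks are invented and stand in for real extract/load logic):

            ```python
            from datetime import datetime

            from airflow.decorators import dag, task

            @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
            def nightly_example():
                @task
                def extract():
                    return [1, 2, 3]  # stand-in for pulling rows from a source system

                @task
                def load(rows):
                    print(f"loaded {len(rows)} rows")  # stand-in for a warehouse write

                load(extract())

            nightly_example()
            ```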

            Qualifications

            The vast majority of our services, applications and data pipelines are written in Python, so several years of shipping production-quality Python software in high-throughput environments is essential. You should be very comfortable with typing, dependency management, and unit, integration and end-to-end tests. If you believe that software isn’t just a program running on a machine, but the solution to someone’s problem, you’re in the right place.

            Having previous experience in planning and executing complex projects that touch multiple teams/stakeholders and run across a whole organization is a big plus. Good communication and collaboration skills are essential, and you shouldn’t shy away from problems, but be able to discuss them in a constructive way with your team and the Wolt Product team at large.

            Familiarity with parts of our tech stack is definitely a plus, but we hire for attitude and the ability to learn over knowledge of any specific technology.

            The tools we are building inside of the data platform ultimately serve our many stakeholders across the whole company, whether they are Analysts, Data Scientists or engineers in other teams that produce or consume data. 

            We want all of our users to love the tools we’re building and that is why we want you to focus on building intuitive and user friendly applications that enable everyone to use and work with data at Wolt.

            See more jobs at Wolt

            Apply for this job

            +30d

            Data Engineer PySpark AWS

            2 years of experienceagileBachelor's degreejiraterraformscalaairflowpostgressqloracleDesignmongodbjavamysqljenkinspythonAWS

            FuseMachines is hiring a Remote Data Engineer PySpark AWS

            See more jobs at FuseMachines

            Apply for this job