Data Engineer Remote Jobs

107 Results

+30d

Data Engineer (Remote)

Balsam Brands | Mexico City, Mexico, Remote
nosql, postgres, sql, oracle, Design, azure, mysql, AWS

Balsam Brands is hiring a Remote Data Engineer (Remote)

Job Description

As Data Engineer, you will design and develop robust, scalable data warehousing solutions, building data solutions based on business requirements. Data solutions may involve retrieval, transformation, storage, and delivery of the data. The Data Engineer must follow standards and implement best practices while writing code, and provide production support for the enterprise data warehouse. Our ideal candidate is a skillful data wrangler who enjoys building data solutions from the ground up and optimizing their performance.

This full-time position reports to the Manager of Data Engineering and will work remotely from Mexico City. To ensure sufficient overlap with functional and cross-functional team members globally, some flexibility with this role's regular work schedule will be required. Most of our teams have overlap with early morning and/or early evening PST. Specific scheduling needs for this role will be discussed in the initial interview.

What you’ll do:

  • Be accountable for building and maintaining the data infrastructure for the organization
  • Collaborate with systems analysts and cross functional partners to understand data requirements
  • Champion the data warehouse; create a denormalized data foundation layer and normalized data marts
  • Define strategies to capture all data sources and the impact of business process changes on data coming from those sources
  • Work on all aspects of the data warehouse/BI environment including architecture, design, development, automation, caching and performance tuning
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources in the cloud, leveraging SQL and cloud data platforms like Snowflake
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
  • Work with stakeholders including the Executive, Product, Data and Design teams to support their data infrastructure needs while assisting with data-related technical issues
  • Continually explore new technologies like Big Data, Artificial Intelligence, Generative AI, Machine Learning, and Predictive Data Modeling

What you bring to the table:

  • Must be fluent in English, both written and verbal
  • 5+ years of professional experience in the data engineering field
  • Hands-on polyglot programming expertise
  • Proficiency with multi-cloud platforms such as Azure, AWS, and/or GCP
  • Experience in Azure Data Factory (ADF) or an equivalent ETL tool
  • Extensive experience designing and developing on the Snowflake Cloud Data Platform
  • Proficiency in designing and implementing data pipelines using diverse data sources including databases, APIs, external data providers, and streaming sources
  • Demonstrated history of designing efficient data models using Medallion Architecture (see the sketch after this list)
  • Deep understanding and experience with relational (SQL Server, Oracle, Postgres and MySQL) and NoSQL databases
  • Experience building and supporting REST APIs for both inbound and outbound data workflows
  • A solid grasp of distributed systems concepts to design scalable and fault-tolerant data architectures
  • Excellent critical thinking to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions
  • Excellent analytical skills for working with structured and unstructured datasets
  • Ability to build processes that support data transformation, workload management, data structures, dependency and metadata
  • Ability to build and optimize data sets, ‘big data’ data pipelines and architectures
  • Ability to understand and tell the story embedded in the data at the core of our business
  • Ability to communicate with non-technical audiences from a variety of business functions
  • Strong knowledge of coding standards, best practices and data governance
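
For illustration only: the Medallion pattern referenced above is usually implemented as bronze/silver/gold layers built on top of one another in the warehouse. A minimal sketch in Python against Snowflake, assuming placeholder credentials and a hypothetical RAW.ORDERS source table (TRY_TO_DATE and TRY_TO_NUMBER are Snowflake conversion functions):

    import snowflake.connector  # pip install snowflake-connector-python

    # Placeholder credentials -- replace with your account's values.
    conn = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
        warehouse="COMPUTE_WH", database="ANALYTICS", schema="PUBLIC",
    )

    # Bronze -> silver -> gold: each layer is built from the one before it.
    statements = [
        # Bronze: land the raw data as-is (assumes RAW.ORDERS was already staged).
        "CREATE TABLE IF NOT EXISTS BRONZE_ORDERS AS SELECT * FROM RAW.ORDERS",
        # Silver: typed, deduplicated, cleaned records.
        """CREATE OR REPLACE TABLE SILVER_ORDERS AS
           SELECT DISTINCT ORDER_ID, CUSTOMER_ID,
                  TRY_TO_DATE(ORDER_DATE)  AS ORDER_DATE,
                  TRY_TO_NUMBER(AMOUNT)    AS AMOUNT
           FROM BRONZE_ORDERS
           WHERE ORDER_ID IS NOT NULL""",
        # Gold: a business-level aggregate ready for BI tools.
        """CREATE OR REPLACE TABLE GOLD_DAILY_REVENUE AS
           SELECT ORDER_DATE, SUM(AMOUNT) AS REVENUE
           FROM SILVER_ORDERS GROUP BY ORDER_DATE""",
    ]

    cur = conn.cursor()
    try:
        for sql in statements:
            cur.execute(sql)
    finally:
        cur.close()
        conn.close()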

Notes: This is a full-time (40 hours/week), indefinite position with benefits. Velocity Global is the Employer of Record for Balsam Brands' Mexico City location, and you will be employed and provided benefits under their payroll. Balsam Brands has partnered with Velocity Global to act as your Employer of Record to ensure your employment will comply with all local laws and regulations and you will receive an exceptional employment experience.

What we offer:

  • Competitive compensation; salary is reviewed yearly and may be adjusted as part of the normal compensation review process
  • Career development and growth opportunities; access to online learning solutions and annual stipend for continuous learning
  • Fully remote work and flexible schedule
  • Collaborate in a multicultural environment; learn and share best practices around the globe
  • Government mandated benefits (IMSS, INFONAVIT, 50% vacation premium)
  • Healthcare coverage provided for the employee and dependents
  • Life insurance provided for the employee
  • 13% employee savings fund, capped to the legal limit
  • Monthly grocery coupons
  • Monthly non-taxable amount for the electricity and internet services 
  • 20 days Christmas bonus
  • Paid Time Off: Official Mexican holidays and 12 vacation days (increases with years of service), plus additional wellness days available at start of employment 


See more jobs at Balsam Brands

Apply for this job

+30d

Senior Data Engineer

Linux Foundation | Austin, TX, Remote
agile

Linux Foundation is hiring a Remote Senior Data Engineer

Job Description

We are seeking a Senior Data Engineer with expertise in Snowflake, dbt, and Fivetran. This role requires a deep understanding of data lineage creation and the ability to translate complex business challenges into technical solutions.

Key Responsibilities:

  • Develop and maintain scalable data pipelines, ensuring data integrity and reliability.
  • Utilize Snowflake, dbt, and Fivetran for data transformation and movement (see the sketch after this list).
  • Create data lineages and models to support business intelligence and data warehousing.
  • Translate complex business issues into technical tasks for team members.
  • Implement best practices in data modeling and ETL processes.
  • Collaborate with cross-functional teams for high-quality data solutions.
  • Define technical deliverables, conduct risk analysis, and estimate efforts for projects.
  • Review work delivered by other technical team members.
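
As a hypothetical illustration of that stack: Fivetran lands the raw data, dbt transforms and tests it, and a thin orchestration script surfaces the results. A minimal sketch, assuming dbt is installed with a configured profile; the model name stg_orders is invented:

    import json
    import subprocess

    # Build the selected models and their downstream dependents, then test them;
    # dbt exits non-zero on failure, so check=True propagates errors.
    subprocess.run(["dbt", "run", "--select", "stg_orders+"], check=True)
    subprocess.run(["dbt", "test", "--select", "stg_orders+"], check=True)

    # dbt writes machine-readable results that can feed monitoring or lineage tooling.
    with open("target/run_results.json") as f:
        results = json.load(f)
    for r in results["results"]:
        print(r["unique_id"], r["status"])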

 

Qualifications

  • Proven experience in data engineering with Snowflake, dbt, and Fivetran.
  • Strong background in data modeling and pipeline construction.
  • Effective communication of technical concepts to non-technical stakeholders.
  • Project management and team leadership experience.
  • Bachelor’s degree in Computer Science, Engineering, or a related field.

Additional Skills:

  • Experience reviewing PRs and providing constructive feedback.
  • Familiarity with product development lifecycle and Agile methodologies.
  • Excellent time management and prioritization skills.
  • Advanced skills in analytics engineering and model design.
  • Comprehensive understanding of various data architecture approaches.
  • Ability to communicate technical issues and value to non-technical stakeholders.

 

See more jobs at Linux Foundation

Apply for this job

+30d

Senior Data Engineer, Data Platform

Webflow | U.S. Remote
remote-first, sql, c++, python

Webflow is hiring a Remote Senior Data Engineer, Data Platform

At Webflow, our mission is to bring development superpowers to everyone. Webflow is the leading visual development platform for building powerful websites without writing code. By combining modern web development technologies into one platform, Webflow enables people to build websites visually, saving engineering time, while clean code seamlessly generates in the background. From independent designers and creative agencies to Fortune 500 companies, millions worldwide use Webflow to be more nimble, creative, and collaborative. It’s the web, made better. 

We’re looking for a Senior Data Engineer, Data Platform to help us build a scalable, reliable and performant platform for both real-time customer-facing applications and internal business intelligence needs. The Data Platform team enables data-driven decision making through robust data infrastructure, pipelines and analytics systems that power our business. 

About the role 

  • Location: Remote-first (United States; BC & ON, Canada) 
  • Full-time 
  • Permanent
  • Exempt 
  • The cash compensation for this role is tailored to align with the cost of labor in different geographic markets. We've structured the base pay ranges for this role into zones for our geographic markets, and the specific base pay within the range will be determined by the candidate’s geographic location, job-related experience, knowledge, qualifications, and skills.
    • United States  (all figures cited below in USD and pertain to workers in the United States)
      • Zone A: $162,500 - $216,050
      • Zone B: $152,700 - $203,100
      • Zone C: $143,000 - $190,150
    • Canada  (All figures cited below in CAD and pertain to workers in ON & BC, Canada)
      • CAD 184,600 - CAD 245,500
  • Please visit our Careers page for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.
  • Reporting to the Senior Engineering Manager

As a Senior Data Engineer, Data Platform, you’ll … 

  • Develop event-driven data infrastructure on AWS.
  • Build data pipelines for ingesting, processing, and routing events using Kafka, Spark streaming and other technologies (see the sketch after this list).
  • Build a data lakehouse architecture.
  • Create unified frameworks for stream, batch and real-time processing.
  • Develop data models, schemas and standards for event data.
  • Optimize data replication and loading across systems.
  • Optimize data storage and access patterns for fast querying.
  • Improve data reliability, discoverability and observability.
  • Improve our planning, development, and deployment processes to help you and your fellow team members.
  • Participate in all engineering activities including incident response, interviewing, designing and reviewing technical specifications, code review, and releasing new functionality.
  • Mentor, coach, and inspire a team of engineers of various levels.
  • Collaborate with software engineers, product managers, and data scientists in an autonomous, supportive team environment.
  • Effectively communicate team priorities and strategy to engineering and cross-functional leadership teams.
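
As a rough illustration of that kind of pipeline (not Webflow's actual implementation): a Spark Structured Streaming job that reads events from Kafka and lands them in cloud storage. The broker, topic, schema, and bucket names are placeholders, and the Spark Kafka connector package is assumed to be on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("event-ingest").getOrCreate()

    # Hypothetical event payload schema -- adjust to the topic's actual contract.
    schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("occurred_at", TimestampType()),
    ])

    # Read the raw event stream from Kafka and parse the JSON payload.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
        .option("subscribe", "site-events")                # hypothetical topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Route parsed events to cloud storage, checkpointing for fault tolerance.
    query = (
        events.writeStream.format("parquet")
        .option("path", "s3a://example-lake/events/")  # placeholder bucket
        .option("checkpointLocation", "s3a://example-lake/_chk/events/")
        .start()
    )
    query.awaitTermination()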

In addition to the responsibilities outlined above, at Webflow we will support you in identifying where your interests and development opportunities lie and we'll help you incorporate them into your role.

About you 

You’ll thrive as a Senior Data Engineer, Data Platform if you:

  • Have 5+ years of experience building large-scale data platforms.
  • Have hands-on experience with event-driven architecture and streaming data processing frameworks like Kafka, Spark, Flink.
  • Have experience building lakehouse architecture on cloud storage.
  • Have experience with storage layers like Hudi, Delta Lake and Iceberg.
  • Are familiar with infrastructure tooling such as Terraform/Pulumi and have worked with Kubernetes.
  • Are experienced with SQL, Python, Java.
  • Are experienced with time-series databases like Clickhouse, InfluxDB.
  • Are experienced working with dbt and Snowflake, BigQuery, Redshift or other data warehouses.
  • Have familiarity with Kimball’s dimensional modeling techniques.
  • Take pride in taking ownership and driving projects to business impact.
  • Have excellent organizational and communication skills, both verbal and written.

Even if you don’t meet 100% of the above qualifications, you should still seriously consider applying. Research shows that you may still be considered for a role if you meet just half of the requirements.

Our Core Behaviors:

  • Obsess over customer experience. We deeply understand what we’re building and who we’re building for and serving. We define the leading edge of what’s possible in our industry and deliver the future for our customers.
  • Move with heartfelt urgency. We have a healthy relationship with impatience, channeling it thoughtfully to show up better and faster for our customers and for each other. Time is the most limited thing we have, and we make the most of every moment.
  • Say the hard thing with care. Our best work often comes from intelligent debate, critique, and even difficult conversations. We speak our minds and don’t sugarcoat things — and we do so with respect, maturity, and care.
  • Make your mark. We seek out new and unique ways to create meaningful impact, and we champion the same from our colleagues. We work as a team to get the job done, and we go out of our way to celebrate and reward those going above and beyond for our customers and our teammates.

Benefits & wellness

  • Equity ownership (RSUs) in a growing, privately-owned company
  • 100% employer-paid healthcare, vision, and dental insurance coverage for employees and dependents (full-time employees working 30+ hours per week), as well as Health Savings Account/Health Reimbursement Account, dependent care Flexible Spending Account (US only), dependent on insurance plan selection where applicable in the respective country of employment; Employees may also have voluntary insurance options, such as life, disability, hospital protection, accident, and critical illness where applicable in the respective country of employment
  • 12 weeks of paid parental leave for both birthing and non-birthing caregivers, as well as an additional 6-8 weeks of pregnancy disability for birthing parents to be used before child bonding leave (where local requirements are more generous employees receive the greater benefit); Employees also have access to family planning care and reimbursement
  • Flexible PTO with a mandatory annual minimum of 10 days paid time off for all locations (where local requirements are more generous employees receive the greater benefit), and sabbatical program
  • Access to mental wellness and professional coaching, therapy, and Employee Assistance Program
  • Monthly stipends to support health and wellness, smart work, and professional growth
  • Professional career coaching, internal learning & development programs
  • 401k plan and pension schemes (in countries where statutorily required) financial wellness benefits, like CPA or financial advisor coverage
  • Discounted Pet Insurance offering (US only)
  • Commuter benefits for in-office employees

Temporary employees are not eligible for paid holiday time off, accrued paid time off, paid leaves of absence, or company-sponsored perks unless otherwise required by law.

Be you, with us

At Webflow, equality is a core tenet of our culture. We are an Equal Opportunity (EEO)/Veterans/Disabled Employer and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law. Pursuant to the San Francisco Fair Chance Ordinance, Webflow will consider for employment qualified applicants with arrest and conviction records.

Stay connected

Not ready to apply, but want to be part of the Webflow community? Consider following our story on our Webflow Blog, LinkedIn, X (Twitter), and/or Glassdoor.

Please note:

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Upon interview scheduling, instructions for confidential accommodation requests will be administered.

To join Webflow, you'll need a valid right to work authorization depending on the country of employment.

If you are extended an offer, that offer may be contingent upon your successful completion of a background check, which will be conducted in accordance with applicable laws. We may obtain one or more background screening reports about you, solely for employment purposes.

For information about how Webflow processes your personal information, please review Webflow’s Applicant Privacy Notice.

 

See more jobs at Webflow

Apply for this job

+30d

Data Engineer

agile, Bachelor's degree, 3 years of experience, sql, Design, azure, api, AWS

Blavity Inc. is hiring a Remote Data Engineer


See more jobs at Blavity Inc.

Apply for this job

+30d

Data Engineer (Mid-Level)

terraform, sql, RabbitMQ, Design, mobile, azure, scrum, qa, git, java, c++, .net, angular, AWS, frontend

Signify Health is hiring a Remote Data Engineer (Mid-Level)

How will this role have an impact?

A Senior Software Engineer - Data develops systems to manage data flow throughout Signify Health’s infrastructure. This involves all elements of data engineering, such as ingestion, transformation, and distribution of data.

What will you do?

  • Communicate with business leaders to help translate requirements into functional specification
  • Develop broad understanding of business logic and functionality of current systems
  • Analyze and manipulate data by writing and running SQL queries
  • Analyze logs to identify and prevent potential issues from occurring
  • Deliver clean and functional code in accordance with business requirements
  • Consume data from any source, such as flat files, streaming systems, or RESTful APIs (see the sketch after this list)
  • Interface with Electronic Health Records
  • Engineer scalable, reliable, and performant systems to manage data
  • Collaborate closely with other Engineers, QA, Scrum master, Product Manager in your team as well as across the organization
  • Build quality systems while expanding offerings to dependent teams
  • Be comfortable in multiple roles, from design and development to code deployment, monitoring, and investigation in production systems
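
For the RESTful ingestion piece, a generic sketch (not Signify Health's actual API) of paging through an HTTP endpoint with the requests library; the URL and paging parameters are hypothetical:

    import requests

    # Hypothetical paginated endpoint; real APIs vary in their paging contract.
    BASE_URL = "https://api.example.com/v1/records"

    def fetch_all(session: requests.Session) -> list[dict]:
        """Page through the endpoint until an empty page signals the end."""
        records, page = [], 1
        while True:
            resp = session.get(
                BASE_URL, params={"page": page, "per_page": 100}, timeout=30
            )
            resp.raise_for_status()  # surface HTTP errors early
            batch = resp.json()
            if not batch:
                break
            records.extend(batch)
            page += 1
        return records

    with requests.Session() as session:
        print(len(fetch_all(session)), "records ingested")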

Requirements

  • Bachelor's degree in Computer Science or equivalent
  • Proven ability to complete projects in a timely manner while clearly measuring progress
  • Strong software engineering fundamentals (data structures, algorithms, async programming patterns, object-oriented design, parallel programming) 
  • Strong understanding and demonstrated experience with at least one popular programming language (.NET or Java) and SQL constructs.
  • Experience writing and maintaining frontend client applications, Angular preferred
  • Strong experience with revision control (Git)
  • Experience with cloud-based systems (Azure / AWS / GCP).
  • High level understanding of big data design (data lake, data mesh, data warehouse) and data normalization patterns
  • Demonstrated experience with queuing technologies (Kafka, SNS, RabbitMQ, etc.)
  • Demonstrated experience with Metrics, Logging, Monitoring and Alerting tools
  • Strong communication skills
  • Strong experience with use of RESTful APIs
  • High-level understanding of HL7 v2.x / FHIR-based interface messages.
  • High level understanding of system deployment tasks and technologies. (CI/CD Pipeline, K8s, Terraform)

The base salary hiring range for this position is $92,300 to $132,900. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits.
In addition to your compensation, enjoy the rewards of an organization that puts our heart into caring for our colleagues and our communities.  Eligible employees may enroll in a full range of medical, dental, and vision benefits, 401(k) retirement savings plan, and an Employee Stock Purchase Plan.  We also offer education assistance, free development courses, paid time off programs, paid holidays, a CVS store discount, and discount programs with participating partners.  

About Us:
Signify Health is helping build the healthcare system we all want to experience by transforming the home into the healthcare hub. We coordinate care holistically across individuals’ clinical, social, and behavioral needs so they can enjoy more healthy days at home. By building strong connections to primary care providers and community resources, we’re able to close critical care and social gaps, as well as manage risk for individuals who need help the most. This leads to better outcomes and a better experience for everyone involved.
Our high-performance networks are powered by more than 9,000 mobile doctors and nurses covering every county in the U.S., 3,500 healthcare providers and facilities in value-based arrangements, and hundreds of community-based organizations. Signify’s intelligent technology and decision-support services enable these resources to radically simplify care coordination for more than 1.5 million individuals each year while helping payers and providers more effectively implement value-based care programs.
To learn more about how we’re driving outcomes and making healthcare work better, please visit us at www.signifyhealth.com.

Diversity and Inclusion are core values at Signify Health, and fostering a workplace culture reflective of that is critical to our continued success as an organization.
We are committed to equal employment opportunities for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences.

See more jobs at Signify Health

Apply for this job

+30d

Mid Level Data Engineer

agile, Bachelor's degree, 3 years of experience, jira, terraform, scala, postgres, sql, oracle, Design, mongodb, pytest, azure, mysql, jenkins, python

FuseMachines is hiring a Remote Mid Level Data Engineer


See more jobs at FuseMachines

Apply for this job

+30d

Staff Data Engineer, Data Platform

Webflow | U.S. Remote
remote-first, sql, Design, c++, python

Webflow is hiring a Remote Staff Data Engineer, Data Platform

At Webflow, our mission is to bring development superpowers to everyone. Webflow is the leading visual development platform for building powerful websites without writing code. By combining modern web development technologies into one platform, Webflow enables people to build websites visually, saving engineering time, while clean code seamlessly generates in the background. From independent designers and creative agencies to Fortune 500 companies, millions worldwide use Webflow to be more nimble, creative, and collaborative. It’s the web, made better. 

We’re looking for a Staff Data Engineer, Data Platform to help us build a scalable, reliable and performant platform for both real-time customer-facing applications and internal business intelligence needs. The Data Platform team enables data-driven decision making through robust data infrastructure, pipelines and analytics systems that power our business. 

About the role 

  • Location: Remote-first (United States; BC & ON, Canada) 
  • Full-time / part-time
  • Exempt status
  • The cash compensation for this role is tailored to align with the cost of labor in different U.S. geographic markets. The base pay for this role ranges from $168,600 in our lowest geographic market up to $237,600 in our highest geographic market. These figures are in $USD and apply to candidates in the United States. The specific base pay within the range will be determined by the candidate’s geographic location, job-related experience, knowledge, qualifications, and skills.
  • Reporting to the Senior Engineering Manager

As a Staff Data Engineer, Data Platform, you'll ...

  • Design and architect event-driven data infrastructure on AWS.
  • Build data pipelines for ingesting, processing, and routing events using Kafka, Spark streaming and other technologies.
  • Design and build a data lakehouse architecture.
  • Create unified frameworks for stream, batch and real-time processing.
  • Develop data models, schemas and standards for event data.
  • Optimize data replication and loading across systems.
  • Optimize data storage and access patterns for fast querying.
  • Improve data reliability, discoverability and observability.
  • Improve our planning, development, and deployment processes to help you and your fellow team members.
  • Participate in all engineering activities including incident response, interviewing, designing and reviewing technical specifications, code review, and releasing new functionality.
  • Mentor, coach, and inspire a team of engineers of various levels.
  • Drive cross-pillar collaboration with software engineers, product managers, and data scientists in an autonomous, supportive team environment.
  • Effectively communicate team priorities and strategy to engineering and cross-functional leadership teams.

In addition to the responsibilities outlined above, at Webflow we will support you in identifying where your interests and development opportunities lie and we'll help you incorporate them into your role.

About you 

You’ll thrive as a Staff Data Engineer, Data Platform if you …

  • Have 7-10+ years of experience building large-scale data platforms.
  • Have 2+ years of experience tech-leading teams, including helping scope and break down work.
  • Have hands-on experience with event-driven architecture and streaming data processing frameworks like Kafka, Spark, Flink.
  • Have experience building lakehouse architecture on cloud storage.
  • Have experience with storage layers like Hudi, Delta Lake and Iceberg.
  • Are well versed with infrastructure tooling such as Terraform/Pulumi and worked with Kubernetes.
  • Are experienced with SQL, Python, or Java.
  • Are experienced with time-series databases like Clickhouse, InfluxDB.
  • Are experienced working with dbt and Snowflake, BigQuery, Redshift or other data warehouses.
  • Have familiarity with Kimball’s dimensional modeling techniques (see the sketch after this list).
  • Have experience tech-leading feature teams on customer-facing products.
  • Value testing and documentation as much as your code.
  • Are comfortable with ambiguity and scoping solutions with your teammates.
  • Have consistently communicated trade-offs throughout a project to meet both technical and business requirements.
  • Enjoy high-visibility work and presenting to executive counterparts.
  • Get excited about encouraging and developing other engineers.
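
To make the Kimball reference concrete, here is a toy star schema: one additive fact table joined to conformed dimensions on surrogate keys. A self-contained sketch using Python's built-in sqlite3; the table and column names are invented:

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # One fact table keyed to conformed dimensions via surrogate keys.
    conn.executescript("""
    CREATE TABLE dim_date (
        date_key  INTEGER PRIMARY KEY,  -- e.g. 20240131
        full_date TEXT,
        month     INTEGER,
        year      INTEGER
    );
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_id  TEXT,              -- natural key from the source system
        segment      TEXT
    );
    CREATE TABLE fact_orders (
        order_key    INTEGER PRIMARY KEY,
        date_key     INTEGER REFERENCES dim_date(date_key),
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount       REAL               -- additive measure
    );
    """)

    # Analytical queries join the fact to its dimensions and roll up measures.
    rows = conn.execute("""
        SELECT d.year, d.month, SUM(f.amount)
        FROM fact_orders f
        JOIN dim_date d ON d.date_key = f.date_key
        GROUP BY d.year, d.month
    """).fetchall()
    print(rows)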

Even if you don’t meet 100% of the above qualifications, you should still seriously consider applying. Research shows that you may still be considered for a role if you meet just half of the requirements.

Our Core Behaviors:

  • Obsess over customer experience. We deeply understand what we’re building and who we’re building for and serving. We define the leading edge of what’s possible in our industry and deliver the future for our customers.
  • Move with heartfelt urgency. We have a healthy relationship with impatience, channeling it thoughtfully to show up better and faster for our customers and for each other. Time is the most limited thing we have, and we make the most of every moment.
  • Say the hard thing with care. Our best work often comes from intelligent debate, critique, and even difficult conversations. We speak our minds and don’t sugarcoat things — and we do so with respect, maturity, and care.
  • Make your mark. We seek out new and unique ways to create meaningful impact, and we champion the same from our colleagues. We work as a team to get the job done, and we go out of our way to celebrate and reward those going above and beyond for our customers and our teammates.

Benefits & wellness

  • Equity ownership (RSUs) in a growing, privately-owned company
  • 100% employer-paid healthcare, vision, and dental insurance coverage for employees and dependents (US; full-time Canadian workers working 30+ hours per week), as well as Health Savings Account/Health Reimbursement Account, dependent on insurance plan selection. Employees also have voluntary insurance options, such as life, disability, hospital protection, accident, and critical illness
  • 12 weeks of paid parental leave for both birthing and non-birthing caregivers, as well as an additional 6-8 weeks of pregnancy disability for birthing parents to be used before child bonding leave. Employees also have access to family planning care and reimbursement
  • Flexible PTO with a mandatory annual minimum of 10 days paid time off, and sabbatical program
  • Access to mental wellness coaching, therapy, and Employee Assistance Program
  • Monthly stipends to support health and wellness, as well as smart work, and annual stipends to support professional growth
  • Professional career coaching, internal learning & development programs
  • 401k plan and financial wellness benefits, like CPA or financial advisor coverage
  • Commuter benefits for in-office workers

Temporary employees are not eligible for paid holiday time off, accrued paid time off, paid leaves of absence, or company-sponsored perks.

Be you, with us

At Webflow, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.

Stay connected

Not ready to apply, but want to be part of the Webflow community? Consider following our story on our Webflow Blog, LinkedIn, Twitter, and/or Glassdoor. 

Please note:

To join Webflow, you'll need valid U.S. or Canadian work authorization depending on the country of employment.

If you are extended an offer, that offer may be contingent upon your successful completion of a background check, which will be conducted in accordance with applicable laws. We may obtain one or more background screening reports about you, solely for employment purposes.

Protecting your privacy and the security of your data is a longstanding top priority for Webflow. Please consult our Applicant Privacy Notice to know more about how we collect, use and transfer the personal data of our candidates.

 

 

See more jobs at Webflow

Apply for this job

+30d

Senior Data Engineer

EquipmentShare | Remote; Chicago; Denver; Kansas City; Columbia, MO
agile, airflow, sql, Design, c++, postgresql, python, AWS

EquipmentShare is hiring a Remote Senior Data Engineer

EquipmentShare is Hiring a Senior Data Engineer.

Your role in our team

At EquipmentShare, we believe it’s more than just a job. We invest in our people and encourage you to choose the best path for your career. It’s truly about you, your future, and where you want to go.

We are looking for a Senior Data Engineer to help us continue to build the next evolution of our data platform in a scalable, performant, and customer-centric architecture.

Our main tech stack includes Snowflake, Apache Airflow, AWS cloud infrastructure (e.g., Kinesis, Kubernetes/EKS, Lambda, Aurora RDS PostgreSQL), Python, and TypeScript.

What you'll be doing

We are typically organized into agile cross-functional teams composed of Engineering, Product, and Design, which allows us to develop deep expertise and rapidly deliver high-value features and functionality to our next-generation T3 Platform.

You’ll be part of a close-knit team of data engineers developing and maintaining a data platform built with automation and self-service in mind to support analytics and machine learning data products for the next generation of our T3 Fleet, enabling end-users to track, monitor and manage the health of their connected vehicles and deployed assets.

We'll be there to support you as you become familiar with our teams, product domains, tech stack and processes — generally how we all work together.

Primary responsibilities for a Senior Data Engineer

  • Collaborate with Product Managers, Designers, Engineers, Data Scientists and Data Analysts to take ideas from concept to production at scale.
  • Design, build and maintain our data platform to enable automation and self-service for data scientists, machine learning engineers and analysts.
  • Design, build and maintain data product framework to support EquipmentShare application data science and analytics features.
  • Design, build and maintain CI/CD pipelines and automated data and machine learning deployment processes.
  • Develop data monitoring and alerting capabilities.
  • Document architecture, processes and procedures for knowledge sharing and cross-team collaboration.
  • Mentor peers to help them build their skills.

Why We’re a Better Place to Work

We can promise that every day will be a little different with new ideas, challenges and rewards.

We’ve been growing as a team and we are not finished just yet; there is plenty of opportunity to shape how we deliver together.

Our mission is to enable the construction industry with tools that unlock substantial increases to productivity. Together with our team and customers, we are building the future of construction.

T3 is the only cloud-based operating system that brings together construction workflows & data from constantly moving elements in one place.

  • Competitive base salary and market leading equity package.
  • Unlimited PTO.
  • Remote first.
  • True work/life balance.
  • Medical, Dental, Vision and Life Insurance coverage.
  • 401(k) + match.
  • Opportunities for career and professional development with conferences, events, seminars and continued education.
  • On-site fitness center at the Home Office in Columbia, Missouri, complete with weightlifting machines, cardio equipment, group fitness space, racquetball courts, a climbing wall, and much more!
  • Volunteering and local charity support that help you nurture and grow the communities you call home through our Giving Back initiative.
  • Stocked breakroom and full kitchen with breakfast and lunch provided daily by our chef and kitchen crew.

About You

You're a hands-on developer who enjoys solving complex problems and building impactful solutions.  Most importantly, you care about making a difference.

  • Take the initiative to own outcomes from start to finish — knowing what needs to be accomplished within your domain and how we work together to deliver the best solution.
  • You are passionate about developing your craft — you understand what it takes to build quality, robust and scalable solutions.
  • You’ll see the learning opportunity when things don’t quite go to plan — not only for you but for how we continuously improve as a team.
  • You take a hypothesis-driven approach — knowing how to source, create and leverage data to inform decision making, using data to drive how we improve, to shape how we evaluate and make platform recommendations.

So, what is important to us?

Above all, you’ll get stuff done. More importantly, you’ll collaborate to do the right things in the right wayto achieve the right outcomes.

  • 7+ years of relevant data platform development experience building production-grade solutions.
  • Proficient with SQL and a high-order object-oriented language (e.g., Python).
  • Experience with designing and building distributed data architecture.
  • Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLflow (see the sketch after this list).
  • Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, Flink and/or others.
  • Familiarity with event data streaming at scale.
  • Proven track record learning new technologies and applying that learning quickly.
  • Experience building observability and monitoring into data products. 
  • Motivated to identify opportunities for automation to reduce manual toil.
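
As a rough Airflow illustration (Airflow 2.x APIs; the DAG name, schedule, and task bodies are hypothetical): a three-task DAG wiring extract, transform, and load steps in sequence:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from the source system")    # placeholder step

    def transform():
        print("clean and model the extracted data")  # placeholder step

    def load():
        print("load the results into the warehouse") # placeholder step

    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task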

EquipmentShare is committed to a diverse and inclusive workplace. EquipmentShare is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

 

#LI-Remote

 

See more jobs at EquipmentShare

Apply for this job

+30d

Senior Data Engineer

EcoVadis | Warsaw, Poland, Remote
sql, Design, azure, qa, git

EcoVadis is hiring a Remote Senior Data Engineer

Job Description

We are looking for a Senior Data Engineer to join our growing data platform team. We need a skilled engineer who will help us develop and maintain data architectures and processing solutions that power our data scientists, BI analytics and many other internal data consumers. One major role of the team is to prepare and maintain self-serve data infrastructure, and ensure data availability, security and scalability. Another one is to build and maintain pipelines for integrating internal and third-party data while ensuring high data integrity and quality. The engineer will work in a dynamic, fast-changing environment with plenty of interaction with other teams.

Responsibilities

  • Design, build, maintain and manage the data platform components;
  • Manage data related projects, coordinate work of involved data engineers;
  • Lead data modelling and data architecture activities;
  • Create, maintain and monitor pipelines for regular data processing while ensuring data reliability, quality and efficiency;
  • Maintain the data serving layer, manage permissions and automate processes around the data catalog;
  • Develop integrations with internal and external data sources;
  • Cross-functional cooperation with data scientists, data engineers, devops, architects, developers, QA and other parties. Collaborate to bring new features and services into production;
  • Develop and improve operational practices and procedures.

Qualifications

  • Experience in Databricks: creating notebooks, developing integration code in Python/PySpark, and working with Delta tables (Delta Lakehouse); see the sketch after this list;
  • Experience managing projects and data-related initiatives, designing data architecture, and data modelling;
  • Expertise with SQL, database design/structures, ETL/ELT design patterns, and DataMart structures (star, snowflake schemas, etc.);
  • Experience with Azure storage solutions, particularly Azure Data Lake Storage, and working with Parquet files and partitions;
  • Experience in Azure Data Factory (ADF): creating pipelines and activities in ADF for full and incremental data loads into Azure Data Lake Store;
  • Experience with Azure DevOps / Git;
  • Azure Data Engineer certification (DP-203) or Databricks Data Engineer Associate / Professional would be a plus;
  • Proficiency in English.
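
For illustration, an incremental PySpark load of the kind described above (not EcoVadis' actual code): merge a day's Parquet extract into a Delta table so reruns stay idempotent. The paths and join key are placeholders, and a Spark session configured for Delta (e.g. Databricks, or delta-spark) is assumed:

    from delta.tables import DeltaTable  # delta-spark package
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("incremental-load").getOrCreate()

    # Placeholder ADLS paths -- adjust to your storage account and containers.
    source_path = "abfss://landing@youraccount.dfs.core.windows.net/customers/"
    target_path = "abfss://curated@youraccount.dfs.core.windows.net/customers_delta/"

    # Read the incremental Parquet extract (e.g. produced by an ADF copy activity).
    updates = spark.read.parquet(source_path)

    # Upsert into the Delta table so reruns of the same load stay idempotent.
    target = DeltaTable.forPath(spark, target_path)
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )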

See more jobs at EcoVadis

Apply for this job

+30d

Data Engineer

Balsam Brands | Mexico City, Mexico, Remote
nosql, postgres, sql, oracle, Design, azure, mysql, AWS

Balsam Brands is hiring a Remote Data Engineer

Job Description

About Us: Balsam Brands is a global eCommerce retailer with roots in holiday and home décor. We strive for excellence in everything we do and present a unique opportunity for those seeking to have a meaningful impact in a people-first company that values relationship building, authenticity, and doing the right thing. We have steadily growing teams in Boise, the Bay Area, Dublin, and the Philippines.

The company's mission is to create joy together. We empower our team and partners to love what they do, provide products and experiences that inspire meaningful moments with family and friends, and give back to our families and communities in impactful ways. When you join Balsam Brands, you'll find a culture of caring people doing challenging work and building a welcoming workplace.

As Data Engineer, you will design and develop robust, scalable data warehousing solutions, building data solutions based on business requirements. Data solutions may involve retrieval, transformation, storage, and delivery of the data. The Data Engineer must follow standards and implement best practices while writing code, and provide production support for the enterprise data warehouse. Our ideal candidate is a skillful data wrangler who enjoys building data solutions from the ground up and optimizing their performance.

This full-time position reports to the Manager of Data Engineering and will work remotely from Mexico City. To ensure sufficient overlap with functional and cross-functional team members globally, some flexibility with this role's regular work schedule will be required. Most of our teams have overlap with early morning and/or early evening PST. Specific scheduling needs for this role will be discussed in the initial interview.

What you’ll do:

  • Be accountable for building and maintaining the data infrastructure for the organization
  • Collaborate with systems analysts and cross functional partners to understand data requirements
  • Champion the data warehouse; create a denormalized data foundation layer and normalized data marts
  • Define strategies to capture all data sources and the impact of business process changes on data coming from those sources
  • Work on all aspects of the data warehouse/BI environment including architecture, design, development, automation, caching and performance tuning
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources in the cloud, leveraging SQL and cloud data platforms like Snowflake
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
  • Work with stakeholders including the Executive, Product, Data and Design teams to support their data infrastructure needs while assisting with data-related technical issues
  • Continually explore new technologies like Big Data, Artificial Intelligence, Generative AI, Machine Learning, and Predictive Data Modeling

What you bring to the table:

  • Must be fluent in English, both written and verbal
  • 5+ years of professional experience in the data engineering field
  • Hands-on polyglot programming expertise
  • Proficiency with multi-cloud platforms such as Azure, AWS, and/or GCP
  • Experience in Azure Data Factory (ADF) or an equivalent ETL tool
  • Extensive experience designing and developing on the Snowflake Cloud Data Platform
  • Proficiency in designing and implementing data pipelines using diverse data sources including databases, APIs, external data providers, and streaming sources
  • Demonstrated history of designing efficient data models using Medallion Architecture
  • Deep understanding and experience with relational (SQL Server, Oracle, Postgres and MySQL) and NoSQL databases
  • Experience building and supporting REST APIs for both inbound and outbound data workflows
  • A solid grasp of distributed systems concepts to design scalable and fault-tolerant data architectures
  • Excellent critical thinking to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions
  • Excellent analytical skills for working with structured and unstructured datasets
  • Ability to build processes that support data transformation, workload management, data structures, dependency and metadata
  • Ability to build and optimize data sets, ‘big data’ data pipelines and architectures
  • Ability to understand and tell the story embedded in the data at the core of our business
  • Ability to communicate with non-technical audiences from a variety of business functions
  • Strong knowledge of coding standards, best practices and data governance

Notes: This is a full-time, permanent position with benefits. Velocity Global is the Employer of Record for Balsam Brands' Mexico City location, and you will be employed and provided benefits under their payroll. Balsam Brands has partnered with Velocity Global to act as your Employer of Record to ensure your employment will comply with all local laws and regulations and you will receive an exceptional employment experience.


See more jobs at Balsam Brands

Apply for this job

+30d

Data Engineer

Brushfire | Fort Worth, TX, Remote
Bachelor's degree, tableau, sql, Firebase, azure, c++, typescript, kubernetes, python

Brushfire is hiring a Remote Data Engineer

Job Description

The primary responsibility of this position is to manage our data warehouse and the pipelines that feed to/from that warehouse. This requires advanced knowledge of our systems, processes, data structures, and existing tooling. The secondary responsibility will be administering our production OLTP database to ensure it runs smoothly and follows standard best practices.

The ideal candidate for this position is someone who is extremely comfortable with the latest technology and trends, and who favors concrete execution over abstract ideation. We are proudly impartial when it comes to solutions – we like to try new things, and the best idea is always the winning idea, regardless of the way we’ve done it previously.

This is a full-time work-from-home position.

Qualifications

The following characteristics are necessary for a qualified applicant to be considered:

  • 3+ years of experience working with data warehouses (BigQuery preferred; Redshift, Snowflake, etc.); see the sketch after this list

  • 3+ years of experience working with dbt, ETL (Fivetran, Airbyte, etc), and Reverse ETL (Census) solutions 

  • 3+ years of experience with Database Administration (Azure SQL, Query Profiler, Redgate, etc)

  • 1+ years of experience with BI/Visualization tools (Google Data/Looker Studio, Tableau, etc)

  • Experience with PubSub databases, specifically Google Cloud Firestore and Firebase Realtime Database

  • Experience with Github (or other similar version control systems) and CI/CD pipeline tools like Azure Devops and Github actions

  • Ability to communicate fluently, pleasantly, and effectively—both orally and in writing, in the English language—with customers and co-workers.

  • Concrete examples and evidence of work product and what role the applicant played in it
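
To illustrate the warehouse side, a minimal query against BigQuery using the official Python client; the project, dataset, and table names are invented, and application-default credentials are assumed:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()  # uses application-default credentials

    # Invented project/dataset/table names -- swap in your own.
    sql = """
        SELECT event_date, COUNT(*) AS registrations
        FROM `my-project.analytics.registrations`
        WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        GROUP BY event_date
        ORDER BY event_date
    """

    for row in client.query(sql).result():
        print(row.event_date, row.registrations)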

The following characteristics are not necessary but are highly desired:

  • Experience with Kubernetes, C#, TypeScript, Python

  • Bachelor's degree or higher in computer science or related technical field

  • Ability to contribute to strategic and planning sessions around architecture and implementation

See more jobs at Brushfire

Apply for this job

+30d

Data Engineer

Kalderos | Remote
terraform, mobile, slack, azure, c++, docker, python

Kalderos is hiring a Remote Data Engineer

About Our Organization

At Kalderos, we are building unifying technologies that bring transparency, trust, and equity to the entire healthcare community with a focus on pharmaceutical pricing.  Our success is measured when we can empower all of healthcare to focus more on improving the health of people. 

That success is driven by Kalderos’ greatest asset, our people. Our team thrives on the problems that we solve, is driven to innovate, and grows through the feedback of peers. Our team is passionate about what they do, and we are looking for people to join our company and our mission.

That’s where you come in! 

What You'll Do

  • Work with product teams to understand and develop data models that can meet requirements and operationalize well
  • Collaborate with other Data Engineers to solve problems that directly impact enterprise customers
  • Build data transformations and data flows utilizing Python, DBT, and Snowflake (see the sketch after this list)
  • Build out automated ETL jobs that reliably process large amounts of data, and ensure these jobs run consistently and are well-monitored
  • Build tools that enable other data engineers to work more efficiently
  • Try out new data storage and processing technologies in proof of concepts and make recommendations to the broader team
  • Tune existing implementations to run more efficiently as they become bottlenecks, or migrate existing implementations to new paradigms as needed
  • Learn and apply knowledge about the drug discount space, and become a subject matter expert for internal teams to draw upon
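
A generic sketch of that Python-to-Snowflake flow (not Kalderos' actual pipeline): clean a pandas frame, then bulk-load it with the connector's write_pandas helper. The file, column, and table names are invented, the credentials are placeholders, and the staging table is assumed to already exist:

    import pandas as pd
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas

    # Invented source extract; in practice this might come from an upstream API.
    df = pd.read_csv("claims_extract.csv")

    # A small, testable transformation step.
    df["claim_amount"] = df["claim_amount"].fillna(0).round(2)
    df = df.drop_duplicates(subset=["claim_id"])

    conn = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
        database="ANALYTICS", schema="STAGING", warehouse="LOADER_WH",
    )

    # write_pandas bulk-loads the frame into an existing table.
    success, n_chunks, n_rows, _ = write_pandas(conn, df, "CLAIMS_STG")
    print(f"loaded={success} rows={n_rows}")
    conn.close()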

What You'll Bring

  • Bachelor’s degree in computer science or similar field or equivalent experience
  • 2+ years of work experience as a Data Engineer in a professional full-time role
  • Experience with Python, Pandas, and DBT
  • Experience with cloud native infrastructure (Azure preferred)
  • Experience with Container Applications (Docker)
  • Experience building ETL pipelines and other services for the healthcare industry

Set Yourself Apart

  • You have a strong inclination to work in rapidly developing and expanding organizations, and the background to do so. You are well-acquainted with the fast-paced, high-volume, and uncertain nature of operations in the organization, and see it as a chance to deliver significant outcomes.
  • Professional experience in application programming with an object-oriented language. 
  • Experience with infrastructure as code tools like Terraform or Azure Resource Manager (ARM) templates

Expected Salary Range: $115,000 - $135,000 base + bonus

This position can be remote in the United States or hybrid in Chicago, IL or Boston, MA. Expected hours will be Eastern or Central time.

____________________________________________________________________________________________

Highlighted Company Perks and Benefits

  • Medical, Dental, and Vision benefits
  • 401k with company match
  • Flexible PTO with a 10 day minimum
  • Opportunity for growth
  • Mobile & Wifi Reimbursement
  • Commuter Reimbursement
  • Continuing education reimbursement
  • Donation matching for charitable contributions
  • Travel reimbursement for healthcare services not available near your home
  • New employee home office setup reimbursement

What It’s Like Working Here

  • We thrive on collaboration, because we believe that all voices matter and we can only put our best work into the world when we work together to solve problems.
  • We empower each other and believe in ensuring all voices are heard.
  • We know the importance of feedback in individual and organizational growth and development, which is why we've embedded it into our practice and culture. 
  • We’re curious and go deep. Our Slack channels fill throughout the day with insightful articles and discussions about our industry and healthcare, and our book club is always bursting with questions.

To learn more: https://www.kalderos.com/company/culture

We know that job postings can be intimidating, and research shows that while men apply to jobs when they meet an average of 60% of the criteria, women and other marginalized folks tend to only apply when they check every box. We encourage you to apply if you think you may be a fit and give us both a chance to find out!

Kalderos is proud to be an equal opportunity workplace.  We are committed to equal opportunity regardless of race, color, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or veteran status.

Kalderos participates in E-Verify.

See more jobs at Kalderos

Apply for this job

+30d

Staff Data Engineer (d/f/m)

Personio | Munich, Remote Germany, London, Dublin, Remote Ireland, Remote UK, Berlin, Remote Berlin
Design, azure, java, python, AWS

Personio is hiring a Remote Staff Data Engineer (d/f/m)

The Role: How you'll make an impact at Personio
This position can be office-based or fully remote from one of the following countries: Germany, Ireland, or the UK. 

At Personio, your work transforms the way millions of people experience work every day. Join our Product & Technology team, where we drive our customers’ outcomes by designing, developing and delivering innovative and high quality products. Be empowered to take ownership of your areas and make an impact on your team, our product, and our customers. 

This role sits in our reporting and analytics function. You will take a lead role in building the latest version of a product that allows our customers to report and visualize their key employee/HR data.

Role Responsibilities: What you'll do
  • Data Pipeline Development: Design, develop, and maintain robust and scalable data pipelines to support the extraction, transformation, and loading (ETL) of data from diverse sources into our data warehouse.

  • Data Modeling: Create and manage data models, ensuring data integrity, consistency, and accuracy. Collaborate with stakeholders to understand data requirements and design appropriate data structures (see the sketch after this list).

  • Performance Optimization: Identify and implement performance improvements for data processing and storage, working to optimize query performance and reduce latency.

  • Data Security and Governance: Implement and enforce data security measures and governance policies to protect sensitive information. Ensure compliance with data privacy regulations.

  • Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand their data needs and provide timely and accurate support.

  • Documentation: Create and maintain comprehensive documentation for data engineering processes, data models, and system architecture.

  • Continuous Learning: Stay abreast of industry trends, emerging technologies, and best practices in data engineering. Apply this knowledge to enhance and evolve our data infrastructure.
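
As a small, generic illustration of the integrity checks mentioned above (the column names are invented; this is not Personio's code):

    import pandas as pd

    def check_integrity(df: pd.DataFrame) -> list[str]:
        """Return human-readable integrity violations (empty list = clean)."""
        problems = []
        if df["employee_id"].isna().any():
            problems.append("null employee_id values")
        if df["employee_id"].duplicated().any():
            problems.append("duplicate employee_id values")
        if (df["headcount"] < 0).any():
            problems.append("negative headcount values")
        return problems

    # Tiny demonstration frame with two deliberate violations.
    df = pd.DataFrame({"employee_id": [1, 2, 2], "headcount": [5, -1, 3]})
    for p in check_integrity(df):
        print("integrity violation:", p)
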
Role Requirements: What you need to succeed
  • Proven experience as a Staff or Principal Data Engineer, with a track record of designing and implementing data solutions.

  • Proficiency in programming languages such as Python, Java, or Scala.

  • Strong experience with data warehousing technologies (e.g., Snowflake, Redshift) and proficiency in SQL.

  • Experience working in an event-driven architecture environment and with cloud platforms (e.g., AWS, Azure, GCP).

  • Familiarity with data governance and security best practices.

  • Excellent problem-solving skills and the ability to work independently or collaboratively in a team.

Why Personio

Personio is an equal opportunities employer, committed to building an integrative culture where everyone feels welcomed and supported. We embrace uniqueness and understand that our diverse, values-driven culture makes us stronger. We are proud to have an inclusive workplace environment that will foster your development no matter your gender, civil status, family status, sexual orientation, religion, age, disability, education level, or race.

Aside from our people, culture, and mission, check out some of the other benefits that make Personio a great place to work:

  • Receive a competitive reward package – reevaluated each year – that includes salary, benefits, and pre-IPO equity

  • Enjoy 28 days of paid vacation, plus an additional day after 2 and 4 years (because we love what we do, but we also love vacation!)

  • Make an impact on the environment and society with 2 (fully paid) Impact Days –  one for an individual project of your choice and one for a company-wide initiative

  • Receive generous family leave, child support, mental health support, and sabbatical opportunities with PersonioCares

  • Find your best way to work with our office-led, remote-friendly PersonioFlex! Most teams offer a roughly 50% remote, 50% in-office working framework

  • Invest in your development with an annual personal development budget to use on professional memberships, external certifications, conferences, and more

  • Connect with your fellow Personios at regular company and team events like All Company Culture Week and local year-end celebrations

  • Engage in a high-impact working environment with flat hierarchies and short decision-making processes

About us
Bring your best. Make your mark. We’re using technology to revolutionize the way HR operates so that we can transform the way millions of people experience work every day. We move fast, challenge the status quo, and support our people as they shape their careers.

With over 10,000 customers and a team of 2,000 in eight offices around the world, now is the perfect time to join! We believe in hiring driven people who want to make an impact. So bring your best, and let’s build the future of HR technology together.

Discover our Personio Principles that guide our mindset, behaviours, and the ways we work together:

Exceed Customer Expectations: We anticipate, prioritize, and solve for the needs of our customers.
Deliver Exceptional Results: We dream big and move with urgency to make great things happen.
Elevate One Another: We work together as trusted partners to amplify our collective impact.
Care to Challenge: We care personally and challenge directly to unlock our full potential.
Ignite Positive Momentum: We embrace the challenge with a positive mindset and celebrate our wins together.
 
 

See more jobs at Personio

Apply for this job

+30d

Data Engineer 3

agilescalaairflowsqloracleDesignazuregitc++mysqlpython

Blueprint Technologies is hiring a Remote Data Engineer 3

Who is Blueprint?

We are a technology solutions firm headquartered in Bellevue, Washington, with a strong presence across the United States. Unified by a shared passion for solving complicated problems, our people are our greatest asset. We use technology as a tool to bridge the gap between strategy and execution, powered by the knowledge, skills, and expertise of our teams, who all have unique perspectives and years of experience across multiple industries. We’re bold, smart, agile, and fun.

What does Blueprint do?

Blueprint helps organizations unlock value from existing assets by leveraging cutting-edge technology to create additional revenue streams and new lines of business. We connect strategy, business solutions, products, and services to transform and grow companies.

Why Blueprint?

At Blueprint, we believe in the power of possibility and are passionate about bringing it to life. Whether you join our bustling product division, our multifaceted services team or you want to grow your career in human resources, your ability to make an impact is amplified when you join one of our teams. You’ll focus on solving unique business problems while gaining hands-on experience with the world’s best technology. We believe in unique perspectives and build teams of people with diverse skillsets and backgrounds. At Blueprint, you’ll have the opportunity to work with multiple clients and teams, such as data science and product development, all while learning, growing, and developing new solutions. We guarantee you won’t find a better place to work and thrive than at Blueprint.

What will I be doing?

Job Summary:

The Data Engineer’s responsibilities include designing, developing, and deploying data integration (ETL and/or ELT) solutions using agreed-upon design patterns and technologies, and working with a wide variety of data sources, including JSON, CSV, Oracle, SQL Server, Azure Synapse, Azure Analysis Services, Azure SQL DB, Data Lake, PolyBase, and streaming data sets.

 

Supervisory Responsibilities:

  • None

 

Duties/Responsibilities:

  • Create workflows, templates, and design patterns
  • Communicate with stakeholders to obtain accurate business requirements
  • Create and perform unit tests for solutions (see the sketch after this list)
  • Convert existing SSIS packages into Azure Data Factory pipelines
  • Perform other related duties as assigned
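
For illustration only, not from the posting: a minimal unit-test sketch in Python, runnable with pytest, for a hypothetical transformation of the kind the unit-testing duty implies.

    # test_transform.py -- run with `pytest`; all names are hypothetical.
    def normalize_region(value: str) -> str:
        # Example transformation: trim whitespace and upper-case a region code.
        return value.strip().upper()

    def test_normalize_region_trims_and_uppercases():
        assert normalize_region("  us-west ") == "US-WEST"

    def test_normalize_region_is_idempotent():
        assert normalize_region(normalize_region("eu")) == "EU"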

 

Required Skills/Abilities:

  • Familiarity with SQL programming fundamentals
  • Familiarity with basic use of Python, R, or Scala
  • Awareness of distributed/parallel computing with Python, Synapse, or Snowflake
  • Familiarity with basic use of modeling tools such as ERwin, DBeaver, Lucid, Visio
  • Awareness of Git for version control of code repositories
  • Awareness of RDBMS development tools: SQL Enterprise Manager, Visual Studio, Azure Data Studio
  • Awareness of open-source database platforms such as MySQL
  • Awareness of Big Data frameworks such as PySpark, Hadoop, etc.
  • Familiarity with Modern Data Estate patterns: Source to Raw to Stage to Curated (see the sketch after this list)
  • Familiarity with Databricks concepts: batch, streaming, Auto Loader, etc.
  • Familiarity with cloud diagnostics, logging, and performance monitoring/tuning
  • Familiarity with data movement tools: ADF, Fivetran, Airflow, etc.
  • Familiarity with database I/O: writing and reading structured and unstructured databases
  • Awareness of debugging, documentation, testing, and optimization skills
  • Awareness of how to convert business needs into technical requirements
  • Experience explaining data engineering concepts to business stakeholders
  • Experience communicating and collaborating effectively with a remote team
  • Experience communicating effectively with interdisciplinary teams of various technical skill levels
  • Learning to work effectively and deliver value in ambiguous settings
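
For illustration only: a minimal PySpark sketch of the Source-to-Raw-to-Stage-to-Curated pattern named above; the paths and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("data-estate-sketch").getOrCreate()

    # Raw: land source data as-is.
    raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

    # Stage: typed, cleaned, deduplicated.
    stage = (raw
             .withColumn("amount", F.col("amount").cast("double"))
             .dropDuplicates(["order_id"]))

    # Curated: business-level aggregate ready for consumption.
    curated = stage.groupBy("region").agg(F.sum("amount").alias("total_amount"))
    curated.write.mode("overwrite").parquet("/data/curated/sales_by_region")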

Education and Experience:

  • Bachelor’s degree in Computer Science, Industrial Engineering, Business Analytics, or equivalent.
  • 3+ years of broad-based IT experience with technical knowledge of data Integration (ETL, ELT) technologies and approaches, Data Warehousing, Data Lake Methodologies.
  • 3+ years’ experience with SQL Server. Expert level TSQL knowledge required.
  • 3+ years’ experience designing and implementing scalable ETL processes including data movement (SSIS, replication, etc.) and quality tools.
  • 2+ years’ experience building cloud hosted data systems. Azure preferred.
  • 2+ years’ experience with SQL Server Analysis Services (SSAS)
  • 2+ years’ experience with SQL Server Integration Services (SSIS)

 

Physical Requirements:

  • The employee is frequently required to sit at a workstation for extended and long periods of time. The employee will occasionally walk; will frequently use hands to finger, handle, grasp or feel; and reach with hands, wrists, or arms in repetitive motions.
  • The employee will frequently use fingers for manipulation of computers (laptop and desktops) and telephone equipment including continuous 10-key, handwriting, use of mouse (or alternative input device), use of keyboard (or alternative input device), or sporadic 10-Key, telephone, or telephonic headsets. This position will also frequently use other office productivity tools such as the printer/scanner.
  • Role requires the ability to lift, carry, push, pull and/or move up to 10lbs on a frequent basis and may require twisting actions of the upper body (i.e., Picking up, carrying a laptop – and twist to work on an L shape desk).
  • Specific vision abilities required by this job include close vision, distance vision, peripheral vision, depth perception, and ability to adjust focus. This position requires frequent use of a computer monitor and visual acuity to perform email responses, prepare and analyze data; transcribe; extensive reading and online communication.
  • Role requires being able to hear and use verbal communication for interactions with internal clients and dependent on role with external clients via conference calls.

 

Cognitive Ability Requirements: The employee must have the ability to:

  • Works with others (co-workers, professionals, public, customers, clients)
  • Works professionally in alignment with the organization’s code of conduct
  • Interact face to face with others (co-workers, superiors)
  • Constant verbal and email communication with others (co-workers, supervisors, vendors, client, customers etc.) to exchange information
  • Ability to take constructive feedback and show courtesy to co-workers, professionals, public, customers, clients
  • Make quick, accurate decisions without supervision
  • Evaluate or make decisions based on experience or knowledge
  • Divide attention between issues requiring multi-tasking
  • Use judgment on routine matters
  • Distinguish situations requiring judgment and adaptation of procedures from one task to another
  • Adapt to tightly scheduled and hurried pace of work activities
  • Meet frequent project deadlines
  • Organize own work
  • Ask questions or request assistance when needed
  • Follow instructions received both orally and in writing

Work Environment:

  • The work environment is usually a traditional office, indoor setting with no exposure to outside elements
  • This position requires no travel    
  • The employee will frequently be required to work closely with others and occasionally work alone
  • This position may require a work schedule across weekends and holidays
  • This position is subject to blackout dates which may include holidays where PTO is not approved
  • May work remotely based on adherence to the organizations work from home policy
  • Reasonable accommodations may be made to enable individuals with disabilities to perform the job

Salary Range

Pay ranges vary based on multiple factors including, without limitation, skill sets, education, responsibilities, experience, and geographical market. The pay range for this position reflects geographic based ranges for Washington state: $88,300 - $115,300 USD/annually. The salary/wage and job title for this opening will be based on the selected candidate’s qualifications and experience and may be outside this range.

Equal Opportunity Employer

Blueprint Technologies, LLC is an equal employment opportunity employer. Qualified applicants are considered without regard to race, color, age, disability, sex, gender identity or expression, orientation, veteran/military status, religion, national origin, ancestry, marital, or familial status, genetic information, citizenship, or any other status protected by law.

If you need assistance or a reasonable accommodation to complete the application process, please reach out to: recruiting@bpcs.com

Blueprint believes in the importance of a healthy and happy team, which is why our comprehensive benefits package includes:

  • Medical, dental, and vision coverage
  • Flexible Spending Account
  • 401k program
  • Competitive PTO offerings
  • Parental Leave
  • Opportunities for professional growth and development

 

Location: Remote - USA

See more jobs at Blueprint Technologies

Apply for this job

+30d

Data Engineer I

Pivot BioRemote
sqlDesignc++pythonAWS

Pivot Bio is hiring a Remote Data Engineer I

About Pivot Bio:  

At Pivot Bio, we are working together to transform agriculture, finding smarter, more sustainable and, ultimately, more profitable ways for farmers to grow. Working with and for farmers, we’re using cutting-edge science to create a microbial nitrogen for the world’s most vital crops. We are replacing synthetic fertilizers with a more sustainable, nature-driven plant nutrition that benefits farmers, consumers and the planet.

As a Data Engineer I at Pivot Bio, you will play a key role in making data actionable at all levels of the organization as part of our Data Operations / Data Platform team supporting company data functions. This person will interact with a variety of teams across the company to create and manage data through its lifecycle.

Responsibilities: 

  • Develop tools and processes that increase efficiency, integrity, and accuracy throughout the data lifecycle by writing and maintaining high-quality, robust, and well-documented code
  • Develop, curate, and maintain data and data pipelines for product research and development efforts
  • Design and implement normalized database and denormalized data warehouse schemas (see the sketch after this list)
  • Collaborate with cross-functional teams to facilitate quality data acquisition practices where needed
  • Participate in code reviews and provide constructive feedback to peers
  • Handle support requests and debug issues
  • Help make data accessible to non-technical stakeholders through the creation of front-end tools and visualization dashboards
  • Interact with fit-for-purpose software applications used for lab information management or agricultural trial management
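
Illustrative only, not part of the posting: a sketch of a normalized table pair beside a denormalized, warehouse-style table, using sqlite3 so it is self-contained; the trial-data names are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Normalized: measurements reference sites by key.
    CREATE TABLE trial_site (site_id INTEGER PRIMARY KEY, site_name TEXT);
    CREATE TABLE measurement (
        measurement_id INTEGER PRIMARY KEY,
        site_id INTEGER REFERENCES trial_site(site_id),
        value REAL
    );
    -- Denormalized: one wide table, optimized for analytical reads.
    CREATE TABLE measurement_wide AS
        SELECT m.measurement_id, s.site_name, m.value
        FROM measurement m JOIN trial_site s USING (site_id);
    """)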

 Physical Demands: 

  • Job will involve mostly (95%) office work with occasional (5%) time spent in the field or in-person meetings to understand processes 
  • Repeating motions that may include the wrists, hands, or fingers 
  • Sedentary work that primarily involves sitting/standing 
  • Communicating with others to exchange information 

 Qualifications and Experience: 

  • Bachelor's degree in STEM with 2+ years of relevant experience in data engineering, data science, data stewardship, and research or production agriculture, or an equivalent amount of relevant experience
  • Proficiency with Python and SQL (R is a plus) with demonstrated experience (e.g. coursework or GitHub portfolio), experience with REST APIs, and experience manipulating databases in a user-oriented environment
  • Experience navigating data ecosystems (e.g. databases, data lakes)
  • Familiarity with data management infrastructure such as Databricks, Snowflake, AWS, or other pipeline management tools
  • Strong attention to detail and orientation to process
  • Great communication skills
  • Travel up to 5% of the time for visits to field trial sites and customer sites

 *Must be authorized to work in the United States* 

What we offer: 

  • Competitive package in a disruptive startup 
  • Stock options 
  • Health/Dental/Vision insurance with employer-paid premiums 
  • Life, Short-Term and Long-Term Disability policies 
  • Employee Assistance Program with free referrals and discounts 
  • 401(k) plan, 3% Match 
  • Commuter benefits 
  • Annual Training & Development support 
  • Flexible vacation policy with a generous holiday schedule 
  • Exciting opportunity to work with a talented and fun team

*Internal employees, please apply by clicking on the Internal Job Board icon on NSIDER

All remote positions and those not located in our Berkeley facility are paid based on National Benchmark data.  Following employment, growth beyond the hiring range is possible based on performance.

Hiring Compensation Range
$74,020 - $92,525 USD

See more jobs at Pivot Bio

Apply for this job

+30d

Data Engineer

Creative CFO (Pty) LtdCape Town, ZA Remote
nosqlpostgressqlDesignazureapiAWS

Creative CFO (Pty) Ltd is hiring a Remote Data Engineer

The position:

As a Data Engineer at Creative CFO, you will be at the forefront of architecting, building, and maintaining our data infrastructure, supporting data-driven decision-making processes.

We are dedicated to optimising data flow and collection to enhance our financial clarity services for high-growth businesses. Join our dynamic and rapidly expanding team, committed to building a world where more SMEs thrive.

To be successful in the role you will need to:

Build and optimise data systems:

    • Design, construct, install, test, and maintain highly scalable data management systems.
    • Ensure systems meet business requirements and industry practices.
    • Build high-performance algorithms, prototypes, predictive models, and proof of concepts.

    Expertly process data:

    • Develop batch processing and real-time data streaming capabilities (a batch sketch follows these bullets).
    • Read, extract, transform, stage, and load data to selected tools and frameworks as required.
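
    A minimal batch sketch, for illustration only: extract, transform, load with the Python standard library. The file, table, and field names are hypothetical.

        import csv
        import sqlite3

        conn = sqlite3.connect("finance.db")
        conn.execute("CREATE TABLE IF NOT EXISTS invoices (client TEXT, amount REAL)")

        with open("invoices.csv", newline="") as f:                # extract
            rows = [(r["client"].strip(), float(r["amount"]))      # transform
                    for r in csv.DictReader(f)]

        conn.executemany("INSERT INTO invoices VALUES (?, ?)", rows)  # load
        conn.commit()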

    Build data integrations

    • Work closely with data analysts, data scientists, and other stakeholders to assist with data-related technical issues and support their data infrastructure needs.
    • Collaborate with data analytics and BI teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organisation.

    Be versatile with technology

    • Exhibit proficiency in ETL tools such as Apache NiFi or Talend and a deep understanding of SQL and NoSQL databases, including Postgres and Cassandra.
    • Demonstrate expertise in at least one cloud services platform like Microsoft Azure, Google Cloud Platform/Engine, or AWS.

    Focus on quality assurance

    • Implement systems for monitoring data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it (a minimal check is sketched below).
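
    A minimal quality-check sketch, assuming hypothetical fields and thresholds: flag missing or out-of-range values before downstream consumers see them.

        def check_quality(rows):
            issues = []
            for i, row in enumerate(rows):
                if row.get("amount") is None:
                    issues.append((i, "missing amount"))
                elif row["amount"] < 0:
                    issues.append((i, "negative amount"))
            return issues

        # Example: the second record is flagged.
        assert check_quality([{"amount": 10.0}, {"amount": None}]) == [(1, "missing amount")]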

    Have a growth mindset

    • Stay informed of the latest developments in data engineering, and adopt best practices to continuously improve the robustness and performance of data processing and storage systems.


    Key skills & qualifications:

    • Bachelor’s degree in Statistics, Data Science, Computer Science, Information Technology, or Engineering.
    • Minimum of 2 years of professional experience in a Data Engineering role, with a proven track record of successful data infrastructure projects.
    • Proficiency in data analysis tools and languages such as SQL, R, and Python.
    • Strong knowledge of data modeling, data access, and data storage techniques.
    • Proficiency in at least one of Microsoft Azure, Google Cloud Platform/Engine, or AWS Lambda environments.
    • Familiarity with data visualisation tools such as PowerBI, Pyramid, and Google Looker Studio.
    • Excellent analytical and problem-solving skills.
    • Effective communication skills to convey technical findings to non-technical stakeholders.
    • Eagerness to learn and adapt to new technologies and methodologies

    Relevant experience required:

    • Previous roles might include Data Engineer, Data Systems Developer, or similar positions with a focus on building and maintaining scalable, reliable data infrastructures.
    • Strong experience in API development, integration, and management. Proficient in RESTful and SOAP services, with a solid understanding of API security best practices
    • Experience in a fast-paced, high-growth environment, demonstrating the ability to adapt to changing needs and technologies.

    Why Creative CFO?

    Vibrant Community

    • Be part of a close-knit, vibrant community that fosters creativity, wellness, and regular team-building events.
    • Celebrate individual contributions to team wins, fostering a culture of recognition.

    Innovative Leadership

    • Work under forward-thinking leadership shaping the future of data analytics.
    • Receive intentional mentorship for your professional and personal development.

    Education and Growth

    • Receive matched pay on education spend and extra leave days for ongoing education.
    • Enjoy a day's paid leave on your birthday - a celebration of you!

    Hybrid Work Setup

    • Experience the flexibility of a hybrid work setup, with currently one in-office day per month.
    • Choose to work in a great office space, if preferred.

    Professional and Personal Resources

    • Use the best technology, provided by the company
    • Benefit from Parental and Maternity Leave policies designed for our team members.

    See more jobs at Creative CFO (Pty) Ltd

    Apply for this job

    +30d

    DataOps Engineer

    DevoteamCité Mahrajène, Tunisia, Remote
    agileterraformDesignansiblegitdockerkuberneteslinuxjenkins

    Devoteam is hiring a Remote DataOps Engineer

    Job Description

    Responsibilities:

    • Implement build automation within the CI/CD pipeline to ensure application security throughout the product lifecycle.
    • Collaborate with key stakeholders and leverage technical expertise across various stages of the SDLC.
    • Deliver high-quality code for modules, lead validation efforts for all testing types, and support implementation, transition, and warranty activities.
    • Contribute to the design, monitoring, maintenance, and optimization of infrastructure and CI/CD pipelines.
    • Design, develop, and maintain CI/CD processes hands-on.
    • Demonstrate proficiency in containerization technology such as Kubernetes.
    • Implement SDLC best practices and configuration management.
    • Utilize Infrastructure as Code (IaC) tools like Ansible, Chef, Terraform, Docker, and CloudFormation.
    • Create and manage detailed technical documentation and processes.
    • Provide support for existing systems and conduct automated root cause analysis (see the sketch after this list).
    • Set up and configure data platforms like Informatica.
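
    For illustration only: a first-pass log-triage sketch in Python, one small piece of automated root cause analysis. The log path and error format are hypothetical.

        import re
        from collections import Counter

        ERROR_PATTERN = re.compile(r"ERROR\s+(\w+):")

        def summarize_errors(log_path: str) -> Counter:
            # Count error classes so the most frequent failure surfaces first.
            counts = Counter()
            with open(log_path) as f:
                for line in f:
                    match = ERROR_PATTERN.search(line)
                    if match:
                        counts[match.group(1)] += 1
            return counts

        # e.g. summarize_errors("/var/log/app/pipeline.log").most_common(3)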

    Qualifications

    • 3-6 years of pertinent professional experience.
    • Experience in infrastructure management and monitoring.
    • Familiarity with Agile SDLC and the role of DevOps within it.
    • Proficiency in DevOps best practices, version control, and CI/CD.
    • Knowledge of DevOps platform tools such as Chef, Puppet, and Docker.
    • Experience with Continuous Integration tools including source control (GIT), build (Maven), and automation (Jenkins).
    • Proficiency in Linux command line, ssh (key management, port forwarding, debugging).
    • Strong communication skills.

    Certifications

    • Certified Kubernetes Administrator (CKA)
    • Certified Kubernetes Application Developer (CKAD)
    • Docker Certified Associate

    See more jobs at Devoteam

    Apply for this job

    +30d

    Sr. Data Engineer

    InMarketRemote - United States
    agilescalaairflowDesignmobilescrumc++pythonAWS

    InMarket is hiring a Remote Sr. Data Engineer

    Job Title: Senior Data Engineer

    Location: Remote - US Only

    About inMarket

    Since 2010, InMarket has been the leader in 360-degree consumer intelligence and real-time activation for thousands of today’s top brands. Through InMarket's data-driven marketing platform, brands can build targeted audiences, activate media in real time, and measure success in driving return on ad spend. InMarket's proprietary Moments offering outperforms traditional mobile advertising by 6x.* Our LCI attribution platform, which won the MarTech Breakthrough Award for Best Advertising Measurement Platform, was validated by Forrester to drive an average of $40 ROAS for our clients. 

    *Source: Wordstream US Google Display Benchmarks for Mobile Media

     

    About the Role

    Join one of the fastest growing teams at InMarket as a Senior Engineer to help us transform the advertising industry using the latest innovations in technologies such as Scala, Python, Spark, best practices like complete test coverage, and all the greatest offerings of the AWS and GCP ecosystems.  

    In this role you will have the opportunity to work with the Data Engineering team to ensure all of InMarket’s products get the data they need, and provide powerful metrics and insights for business leaders. You would also have the opportunity to learn, further develop, and become an expert in Spark, BigQuery, and other big data platforms. Haven’t used some of these before? We believe continuous learning is a cornerstone value of any talented engineer, and what better way to learn than by building high-quality products.

     

    Your Daily Impact as a Senior Data Engineer:

    • Work closely with other Data Engineering team members
    • Participate in daily scrum
    • Work closely with product and engineering leads to scope and complete projects
    • Work on data pipeline features across AWS and GCP to ensure products get the data they need in a fault-tolerant and testable manner (see the sketch after this list)
    • Maintain clear metrics and test coverage of our pipelines
    • Take ownership of features and lead all life cycle stages for them, including requirement analysis, design, development, testing, and deployment
    • Work closely with Cloud and other engineering teams on a rotation basis to handle reported bugs/issues with the platform
    • Perform code reviews for your peers
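
    A minimal fault-tolerance sketch, illustrative only: retry a flaky pipeline step with exponential backoff using the standard library. The step itself is hypothetical.

        import time

        def run_with_retries(step, max_attempts=3, base_delay=1.0):
            # Retry with exponential backoff; re-raise after the final attempt.
            for attempt in range(1, max_attempts + 1):
                try:
                    return step()
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))

        # run_with_retries(lambda: load_partition("2024-01-01"))  # hypothetical step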

    Your Expertise & Experience:

    • Bachelor’s Degree in Computer Science
    • 6+ years experience in software development
    • 3+ years experience in Spark, Scala, Python, Airflow
    • Experience testing data pipelines
    • Experience working with databases, distributed systems, concurrency
    • Experience working with Cloud IaaS (e.g. AWS, GCP)
    • Experience with Google Cloud BigQuery

    Nice to Have:

    • Experience with ML
    • Experience with BigQuery
    • Experience with Scala

    Benefits Summary

    • Competitive salary, stock options, flexible vacation
    • Medical, dental and Flexible Spending Account (FSA)
    • Company Matched 401(k)
    • Unlimited PTO (Within reason)
    • Talented co-workers and management
    • Agile Development Program (For continued learning/professional development)
    • Paid Paternity & Maternity Leave

     


    For candidates in California, Colorado, and New York City, the Targeted Base Salary Range for this role is $135,000 to $168,480. 

    Actual salaries will vary depending on factors including but not limited to work experience, specialized skills and training, performance in role, business needs, and job requirements. Base salary is subject to change and may be modified in the future. Base salary is just one component of InMarket’s total rewards package that also may include bonus, equity, and benefits.  Ask your recruiter for more information!

    InMarket is an Equal Opportunity Employer (EOE). Qualified applicants are considered for employment without regard to age, race, color, religion, sex, national origin, sexual orientation, disability, or veteran status.

    Privacy Notice for California Job Applicants: https://inmarket.com/ca-notice-for-job-applicants/

    #LI-Remo

    See more jobs at InMarket

    Apply for this job

    +30d

    Data Engineer

    golangscalaairflowsqljavapython

    Chattermill is hiring a Remote Data Engineer

    Data Engineer

    UK/EU (Hybrid or Remote)

    £Competitive

     

    ⭐️ Our Perks ⭐️ 

    ❤️ Monthly Health & Wellness budget, increasing with length of service

    Annual Learning and Development budget

    Flexible working in a choice-first environment - we trust the way you want to work

    WFH Equipment (let us know what you need and we’ll get it for you!)

    25 Holiday Days + your local bank holidays, plus an extra day for every year of service

    Your birthday off

    Paid sick leave

    Enhanced Family Leave (UK Only)

    ⚕️ Optional healthcare plan

    The ability to share in the company’s success through options

    Perks including discounts on cinema tickets, utilities and more

    Annual Chattermill summits plus regular socials throughout the year

    If you’re in London, a dog friendly office with great classes, events, and a rooftop terrace

     

    The Role of Data Engineer

    As our Data Engineer, you'll play a crucial part in integrating 3rd party customer data into our cutting-edge platform. You'll get hands-on experience with Python, Airbyte, DBT, Dagster, BigQuery, and Kafka, making you a key player in our data orchestration processes.

    What you'll be doing as Data Engineer:

    • Building integrations to import 3rd party customer data using Python and various data orchestration tools (Airbyte, DBT, Dagster, etc.); a minimal sketch follows this list.
    • Developing and maintaining data transformations and mappings, ensuring data integrity and efficiency.
    • Using Python to write and optimise data integration and transformation pipelines.
    • Working with BigQuery and Kafka to manage large-scale data.
    • Supporting and diagnosing issues within data pipelines to ensure seamless data flow.
    • Collaborating closely with team members on various data engineering tasks.
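
    A minimal Dagster sketch, for illustration only; the asset names are hypothetical, and a real integration would pull from Airbyte and transform with DBT.

        from dagster import asset, materialize

        @asset
        def raw_feedback() -> list[dict]:
            # Stand-in for a 3rd-party import (e.g. via an Airbyte connection).
            return [{"text": " Great app! ", "score": 5}]

        @asset
        def clean_feedback(raw_feedback: list[dict]) -> list[dict]:
            # Downstream asset: Dagster wires the dependency by parameter name.
            return [{**row, "text": row["text"].strip()} for row in raw_feedback]

        if __name__ == "__main__":
            materialize([raw_feedback, clean_feedback])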

    What you’ll need:

    • Data Expertise: A solid foundation in working with data is highly desirable, whether through a relevant degree, university-level projects, completion of a data-focused course, or equivalent work experience in a professional tech environment, such as a role as a data engineer, data analyst, or similar.
    • Coding Skills: Experience in Python and SQL coding is highly advantageous. Your ability to navigate and manipulate data through these languages will be an important part in this role.
    • Data Cleaning and Process Mapping: Demonstrated capability in cleaning and refining data, coupled with the ability to understand and map out processes.

    It'd be a bonus if you have:

    • Knowledge or experience in other programming languages like Java, Scala, Golang, Rust, or Ruby.
    • Knowledge or experience in data engineering tools like Airbyte, DBT, Dagster, BigQuery, Kafka, and similar technologies is a significant bonus.
    • Familiarity with similar alternative technologies such as Airflow, Snowflake, Databricks, Redshift, Athena, Apache Spark, Fivetran, Prefect, Pandas, Polars, Parquet, DuckDB is also acceptable.

     

    Chattermill - Who we are:

    Co-founded by Mikhail Dubov and Dmitry Isupov in 2015 while at Entrepreneur First, Chattermill was born out of their frustration that it took weeks, sometimes months, for customer research to yield any quality insights. Often, these would be out of date by the time they reached decision-makers. And it was also financially out of reach for most companies.

    When they started what eventually became Chattermill, they had a hunch that they could use the newly available tech of deep learning to help companies find insights amidst messy data. Their vision was to take what agencies and cutting-edge brands were doing by hand and automate it.

    Today, our Unified Customer Intelligence platform is used by the world’s best-loved customer-centric companies including Uber, HelloFresh, Wise, and more, all of whom can now see, and act on their customer reality.

    Our Mission, Vision, and Purpose 

    • Our mission is to empower teams to see their customers’ reality 
    • Our vision is to Analyse Over a Billion Pieces of Customer Feedback by 2027

     

    Our Hiring Process

    1. Let’s introduce ourselves – you’ll have an introductory call with our Talent team - we’d love to learn more about you, your ambitions, and what you’re looking for in your next step. 
    2. Get to know your would-be team – you’ll have a call with your would-be manager, Dean Cornish (Engineering Manager - Data Platform), to learn more about the role and show off your experience 
    3. Show us what you are made of – you’ll complete a short task, which you’ll then run through on a call with the team 
    4. How our values and your career goals align – you’ll have a call with our cofounder to learn more about life at Chattermill and ensure we’re the right place for your next stage of growth

     

    Our Values 

    We are obsessed with experience – We take our mission to rid the world of bad Customer Experience seriously, and we practice what we preach.

    We believe in the power of trust – Whether it's with each other, our customers, partners, or other stakeholders, we always communicate with openness and trust.

    We act as responsible owners – Whether it's about the company, a team, a project, or a task, having the freedom to make decisions in our area of responsibility is a crucial driver for us.

    We share a passion for growth & progress – On every level, we’re motivated by taking on new challenges – even if they seem out of reach. We recognise that we are learning machines and we always seek to action feedback and improve collectively.

    We set our ambitions high but stay humble – We've come together to build a product and a category that’s never been seen before. While we're an ambitious bunch with lofty goals, we don't approach this goal recklessly.

    We believe the right team is the key to success – At Chattermill we’ve learned that all our important achievements have been the result of the right people collaborating together – that’s why we need you to apply today!

     

    Diversity & Inclusion

    We want to enable exceptional experiences for everyone, and to achieve this we need everyone’s voice in our team.  We are on a mission to bring more diversity into the business in 2023 and to give everyone (from all backgrounds and abilities) a chance to join us, even if they may not fit all of the requirements set out in this job spec.

    We realise that some may be hesitant to apply for a role when they don’t meet 100% of the listed requirements – we believe in potential and will happily consider all applications based on the skills and experience you have, we’d love to be part of your growth and we encourage you to apply!

    We believe in removing unconscious biases from our recruitment process wherever possible.  As part of this effort, we ask that you do not include your photograph or personal details with your application.

     

     

     

    Key words: Data Engineer, Data Ops, Data Analyst, Coding, Python, SQL, Data Expert, Data Cleaning, Process Mapping, Data Engineering, Airbyte, DBT, Dagster, BigQuery, Kafka, Graduate, Data Ops Executive

    See more jobs at Chattermill

    Apply for this job

    +30d

    Senior Data Engineer - AWS (Remote)

    Fannie MaeReston, VA, Remote
    agileBachelor degreesqloraclemongodbuiscrummysqlpythonAWS

    Fannie Mae is hiring a Remote Senior Data Engineer - AWS (Remote)

    Job Description

    As a valued colleague on our team, you will contribute to developing data infrastructure and pipelines to capture, integrate, organize, and centralize data while testing and ensuring the data is readily accessible and in a usable state, including quality assurance.

    THE IMPACT YOU WILL MAKE
    The Senior Data Engineer role will offer you the flexibility to make each day your own, while working alongside people who care so that you can deliver on the following responsibilities:

    • Identify customer needs and intended use of requested data in the development of database requirements and support the planning and engineering of enterprise databases.
    • Maintain comprehensive knowledge of database technologies, complex coding languages, and computer system skills.
    • Support the integration of data into readily available formats while maintaining existing structures and govern their use according to business requirements.
    • Analyze new data sources and monitor the performance, scalability, and security of data.
    • Create an initial analysis and deliver the user interface (UI) to the customer to enable further analysis.

     

    Qualifications

    THE EXPERIENCE YOU BRING TO THE TEAM

    Minimum Required Experiences:

    • 2+ years with Big Data Hadoop clusters (HDFS, YARN, Hive, MapReduce frameworks), Spark, AWS EMR
    • 2+ years of recent experience building and deploying applications in AWS (S3, Hive, Glue, AWS Batch, DynamoDB, Redshift, AWS EMR, CloudWatch, RDS, Lambda, SNS, SQS, etc.)
    • 2+ years of Python, SQL, SparkSQL, PySpark (see the sketch after this list)
    • Excellent problem-solving skills and strong verbal & written communication skills
    • Ability to work independently as well as part of an agile team (Scrum / Kanban)
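
    For illustration only: a PySpark/SparkSQL sketch of the kind of job the requirements above describe, e.g. on EMR. The S3 bucket and schema are hypothetical.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("loan-data-sketch").getOrCreate()

        loans = spark.read.parquet("s3://example-bucket/loans/")  # hypothetical path
        loans.createOrReplaceTempView("loans")

        summary = spark.sql("""
            SELECT state, COUNT(*) AS loan_count, AVG(balance) AS avg_balance
            FROM loans
            GROUP BY state
        """)
        summary.write.mode("overwrite").parquet("s3://example-bucket/curated/loan_summary/")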


    Desired Experiences:

    • Bachelor's degree or equivalent
    • Knowledge of Spark streaming technologies 
    • Experience in working with agile development teams
    • Familiarity with Hadoop / Spark information architecture, Data Modeling, Machine Learning (ML)
    • Knowledge of Environmental, Social, and Corporate Governance (ESG)

    Skills

    • Skilled in cloud technologies and cloud computing
    • Programming including coding, debugging, and using relevant programming languages
    • Experience in the process of analyzing data to identify trends or relationships to inform conclusions about the data
    • Skilled in creating and managing databases with the use of relevant software such as MySQL, Hadoop, or MongoDB
    • Skilled in discovering patterns in large data sets with the use of relevant software such as Oracle Data Mining or Informatica
    • Experience using software and computer systems' architectural principles to integrate enterprise computer applications such as xMatters, AWS Application Integration, or WebSphere
    • Working with people with different functional expertise respectfully and cooperatively to work toward a common goal
    • Communication including communicating in writing or verbally, copywriting, planning and distributing communication, etc.

    Tools

    • Skilled in AWS Analytics such as Athena, EMR, or Glue
    • Skilled in AWS Database products such as Neptune, RDS, Redshift, or Aurora
    • Skilled in SQL
    • Skilled in AWS Compute such as EC2, Lambda, Beanstalk, or ECS
    • Skilled in Amazon Web Services (AWS) offerings, development, and networking platforms
    • Skilled in AWS Management and Governance suite of products such as CloudTrail, CloudWatch, or Systems Manager
    • Skilled in Python object-oriented programming

    See more jobs at Fannie Mae

    Apply for this job