Goldman Sachs
Design, build, and maintain scalable data pipelines and curated datasets within a modern Lakehouse and AI platform. Ensure data quality, reliability, and performance while collaborating with stakeholders to deliver robust data products.
Requires 7-12+ years of experience in data engineering with strong proficiency in Python or Java and SQL. Candidates must demonstrate expertise in distributed data processing, data modelling, and software engineering fundamentals.
The Opportunity
Join a team building the data foundations that support the firm’s AI and analytics capabilities. This role sits within the engineering effort to develop a modern Lakehouse and AI data platform that enables reliable, well-governed and high-performing data use across the firm.
At Goldman Sachs, engineering teams are positioned at the center of the business, building scalable systems, solving complex technical problems and turning data into action. In data engineering roles, the emphasis is on designing, building and maintaining large-scale data platforms, delivering production pipelines, improving reliability and quality, and partnering closely with users of the platform.
This is a delivery-focused role for engineers who want to build robust data assets in production, work with modern data technologies, and grow over time within the firm. You will contribute to the data models, pipelines and platform capabilities that underpin analytics, operational decision-making and emerging AI use cases.
Role Summary
As a Data Engineer, Lakehouse and AI Data Platform, you will design, build, test and support data pipelines and curated datasets on the firm’s modern data platform. You will work across ingestion, transformation, modelling, optimization and data quality, helping to deliver data products that are reliable, scalable and fit for purpose.
The role is suited to engineers who are comfortable writing code, working with SQL and distributed data processing, and solving practical delivery problems in a team environment. More experienced candidates may also contribute to technical design, platform standards and the shaping of delivery approaches across a wider set of use cases.
Key Responsibilities
Pipeline Engineering
Data Modelling and Curation
Data Quality and Reconciliation
Delivery and Partnership
Skills and Experience
Required
Data Engineering Capability
Technology Environment
The role will involve working with a modern and evolving data stack. Candidates are not expected to have deep expertise in every tool from day one but should bring relevant experience and the ability to work across comparable technologies.
Examples of technologies in scope include:
You will also work with internal data management and platform tooling, so a practical and adaptable engineering mindset is important.
What We Are Looking For
We are looking for engineers who can deliver well-structured, reliable solutions in production and who take ownership of the quality of what they build. The role suits candidates who are technically strong, pragmatic and comfortable working in a fast-paced environment where data platforms support important business outcomes.
Stronger candidates will typically demonstrate:
Location: Dallas
Work Type: On-site
Experience: 10+ years