About the Role
You will deploy purpose-built vision models on Apple devices while developing innovative techniques to optimize their performance, efficiency, and scalability. You will also collaborate with hardware, software, and AI teams to integrate machine learning components into production systems.
Requirements
Candidates must have a BS degree and at least 10 years of relevant industry experience. Strong proficiency in Python, C++, model compression techniques, and the machine learning model development lifecycle is required.
Full Job Description
We’re starting to see the incredible potential of multimodal foundation and large language models, and many applications in the computer vision and machine learning domain that previously appeared infeasible are now within reach. We are looking for a highly motivated and skilled Machine Learning Integration Engineer to join our team in the Video Computer Vision group and help us ship cutting-edge computer vision technology on Apple devices. The Video Computer Vision org has pioneered features such as FaceID, FaceKit, and Gaze and Hand gesture control, which have changed the way millions of users interact with their devices. We balance research and product requirements to deliver Apple-quality, pioneering experiences, innovating through the full stack and partnering with HW, SW, and AI teams to shape Apple's products and bring our vision to life.
Description
As part of the Video Computer Vision (VCV) team, you will deploy purpose-built vision models on Apple devices, developing innovative techniques to optimize their performance, efficiency, and scalability on-device.
Minimum Qualifications
BS degree and a minimum of 10 years of relevant industry experience.
Strong knowledge of model compression techniques such as pruning, distillation, quantization and weight clustering.
Solid understanding of operating systems and extensive programming experience in Python and C++.
Experience working with PyTorch.
Experience with machine learning model development lifecycle, including data preprocessing, model training, evaluation, and deployment.
Foundational understanding of machine learning, including multimodal LLMs and the integration of ML components into production systems.
Preferred Qualifications
Experience with CoreFoundation, RealityKit and CoreML frameworks.
Fundamental knowledge of real-time video pipelines, image transformations, and rendering loops.
Programming experience with Swift.