AI and ML are used at Facebook at ultra-large scale. Facebook's AI usage has grown exponentially over the last two years and is still accelerating. Nearly every dimension, such as model size, data size, feature count, number of unique users, and number of models, is exhibiting the same exponential growth rate. This growth pushes Facebook's AI infrastructure to scale not only compute for inference and training but also the critical components of data and feature processing and engineering. The diversity of business applications for AI results in a large amount of experimentation, which drives both the cost and the importance of AI engineer developer efficiency.
Creating pipelines that are affordable and efficient while improving developer efficiency is critical to enabling the sustained scaling of AI at Facebook. Handling exabytes of training data and features, along with low-latency inference, is changing the nature of data processing systems. In this talk, we will outline the challenges of growth and scale, and how those challenges are changing the way we build our data and processing systems.
11:00AM - Day 2