Apple has unveiled MLX, a machine learning (ML) framework tailored for Apple Silicon computers, aiming to simplify training and running ML models on devices powered by Apple’s M1, M2, and M3 series chips. The newly open-sourced framework offers a unified memory model along with a C++ API and a Python API closely modeled on NumPy, the Python library for scientific computing. Apple says MLX lets users train and run models directly on their Apple Silicon devices, without first converting and optimizing them through Core ML.
MLX’s design draws on popular frameworks such as ArrayFire, JAX, NumPy, and PyTorch. Notably, MLX arrays live in shared memory, so operations on them can run on either the CPU or the GPU without copying data between devices. This unified memory model improves efficiency and streamlines ML workloads on Apple Silicon devices.
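The unified memory model is easiest to see in the Python API. The sketch below is illustrative rather than authoritative: it assumes the mlx.core module as documented around the framework’s release, and exact function names or defaults may differ by version.

```python
# Minimal sketch of MLX's NumPy-like Python API (illustrative; assumes mlx.core
# as documented at release -- names and defaults may vary by version).
import mlx.core as mx

# Arrays live in unified memory, so the same buffers are visible to CPU and GPU.
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# Pick where an operation runs with a stream/device argument -- no data copies.
c_gpu = mx.matmul(a, b, stream=mx.gpu)   # run on the GPU
c_cpu = mx.matmul(a, b, stream=mx.cpu)   # run on the CPU, same underlying arrays

# MLX is lazy: computation is deferred until results are needed or forced.
mx.eval(c_gpu, c_cpu)
print(c_gpu.shape)  # (4096, 4096)
```

Because the arrays never move between separate CPU and GPU memory pools, switching devices is just a matter of choosing a different stream for the operation.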
In practical examples, Apple demonstrated MLX’s capabilities with tasks such as image generation using Stable Diffusion on Apple Silicon hardware. The company claims that MLX outperforms PyTorch in certain scenarios, particularly when generating batches of images: according to Apple, MLX achieves higher throughput than PyTorch at batch sizes of 6, 8, 12, and 16, with up to a 40% speedup.
The tests were conducted on a Mac equipped with an M2 Ultra chip, Apple’s fastest processor to date. MLX generated 16 images in 90 seconds, while PyTorch would take approximately 120 seconds for the same task.
Apple’s MLX framework also supports a range of ML applications, such as text generation with Meta’s open-source LLaMA language model and the Mistral large language model. AI and ML researchers can likewise run OpenAI’s Whisper, an open-source speech recognition model, on their own computers using MLX.
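Beyond running pretrained models, the same API supports training directly on Apple Silicon. The snippet below is a hedged sketch of a single training step, assuming the mlx.nn and mlx.optimizers modules from Apple’s MLX documentation; the model, data, and hyperparameters are hypothetical and chosen only to keep the example self-contained.

```python
# Hedged sketch of one on-device training step with MLX's neural-network
# utilities (assumes mlx.nn and mlx.optimizers as documented; model and data
# below are hypothetical).
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

class TinyClassifier(nn.Module):
    """A small two-layer model used only for illustration."""
    def __init__(self, in_dims: int, hidden: int, out_dims: int):
        super().__init__()
        self.l1 = nn.Linear(in_dims, hidden)
        self.l2 = nn.Linear(hidden, out_dims)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

def loss_fn(model, x, y):
    return nn.losses.cross_entropy(model(x), y).mean()

model = TinyClassifier(64, 128, 10)
optimizer = optim.SGD(learning_rate=1e-2)

# Synthetic batch, just to make the example runnable end to end.
x = mx.random.normal((32, 64))
y = mx.random.randint(0, 10, (32,))

# Compute the loss and gradients, then update the model parameters in place.
loss_and_grad = nn.value_and_grad(model, loss_fn)
loss, grads = loss_and_grad(model, x, y)
optimizer.update(model, grads)
mx.eval(model.parameters(), optimizer.state)  # force the lazy computation
print(loss.item())
```

Since everything stays in unified memory, the same loop runs on the laptop or desktop doing the development, which is the workflow Apple highlights for its LLaMA, Mistral, and Whisper examples.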
The release of MLX aligns with Apple’s commitment to facilitating ML research and development on its hardware. It gives developers a foundation for building efficient on-device ML features, potentially enhancing apps and services that use machine learning while running entirely on users’ Apple Silicon devices. With MLX, Apple aims to empower the ML community and foster advancements in the field by offering a versatile, efficient framework tailored to its hardware ecosystem.