AiPhreaks

Safetensors is Joining the PyTorch Foundation

By Jakub Antkiewicz

April 9, 2026

Safetensors, the secure format for storing and sharing AI model weights, has officially joined the PyTorch Foundation as a foundation-hosted project. The move transfers governance of the widely adopted open-source tool from its creator, Hugging Face, to a vendor-neutral home under the Linux Foundation. The shift is significant because it places a critical piece of open-source ML infrastructure under community control, ensuring its development aligns with the broad ecosystem of companies and researchers who rely on it for safe model distribution.

Originally developed by Hugging Face to mitigate the security risks of pickle-based formats, which can execute arbitrary code on load, Safetensors became the de facto standard on platforms like the Hugging Face Hub. Its design prioritizes simplicity and performance: a JSON header followed by raw tensor data, a layout that allows zero-copy loading directly from disk. While Hugging Face's core maintainers will continue to lead day-to-day development as part of the Technical Steering Committee, the project's repository, trademark, and formal governance now reside with the Linux Foundation, with a documented path for any community member to become a maintainer.
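That layout is simple enough to sketch with nothing but the Python standard library: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype, shape, and byte offsets, then the raw tensor bytes. The snippet below is an illustrative sketch of that structure, not the safetensors library itself; the tensor name and values are made up for the example.

```python
import json
import struct

def write_safetensors(name, raw_bytes, shape):
    """Build a minimal single-tensor blob in the safetensors layout (sketch)."""
    header = {
        name: {
            "dtype": "F32",                       # float32, per the format's dtype names
            "shape": shape,
            "data_offsets": [0, len(raw_bytes)],  # byte range within the data section
        }
    }
    header_bytes = json.dumps(header).encode("utf-8")
    # 8-byte little-endian header size, then the JSON header, then raw data
    return struct.pack("<Q", len(header_bytes)) + header_bytes + raw_bytes

def read_header(blob):
    """Parse the header; in practice the data section can stay memory-mapped."""
    (header_len,) = struct.unpack("<Q", blob[:8])
    header = json.loads(blob[8 : 8 + header_len])
    return header, blob[8 + header_len :]

# Two float32 values packed as raw little-endian bytes (illustrative data)
data = struct.pack("<2f", 1.0, 2.0)
blob = write_safetensors("weight", data, [2])
header, payload = read_header(blob)
```

Because the header fully describes each tensor's byte range, a reader can memory-map the file and hand out tensor views without copying or deserializing anything, which is what makes loading both fast and safe compared with pickle.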

For the majority of users, this transition introduces no breaking changes to APIs or existing models. The primary impact is on the long-term trajectory and stability of the AI ecosystem. By residing within the PyTorch Foundation, Safetensors is positioned for tighter integration with PyTorch core and other hosted projects like vLLM and DeepSpeed. The project's roadmap now includes collaborative efforts to develop device-aware loading for GPUs, native APIs for parallel processing, and formal support for emerging quantization formats, solving systemic challenges for the entire industry rather than within a single company's silo.

Hugging Face's decision to place Safetensors under the PyTorch Foundation's neutral governance is a strategic move to cement the format as the industry standard for model weight distribution. Vendor-neutral stewardship mitigates concerns of single-vendor control and encourages wider, deeper integration across the AI stack.