However, the industry is still largely controlled by companies with access to cutting-edge GPUs from NVIDIA and vast amounts of proprietary data. Dr James Kang, Senior Lecturer in Computer Science at RMIT Vietnam, highlights this imbalance.
“Big Tech has a significant advantage because of their financial resources and access to high-performance computing hardware. But the emergence of open-source models like DeepSeek R1 proves that smaller AI firms can still drive innovation by focusing on niche applications and efficiency,” Dr Kang said.
Meanwhile, Dr Jeff Nijsse, Senior Lecturer in Software Engineering at RMIT Vietnam, believes that open-source AI will ultimately shape the future.
“All it takes is one powerful open-source model to inspire others. Startups can build on existing work and innovate, even with limited resources. The dominance of expensive, high-end GPUs won’t last forever, and as older hardware becomes available, AI development will become more accessible,” Dr Nijsse said.
DeepSeek is not alone in this open-source revolution. Dr Nijsse points out that other open-source models, such as Llama 3 and Qwen2.5-72B, are also large-scale models trained with reinforcement learning. Unlike DeepSeek R1, however, these models do not use the mixture-of-experts (MoE) architecture, a key feature behind DeepSeek’s efficiency. All three can be accessed on Hugging Face, a platform that enables developers to download and self-host AI models.
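The efficiency gain from mixture-of-experts comes from sparse activation: a small gating network selects a few experts per token, so only a fraction of the model’s parameters are evaluated on each input. The sketch below is a deliberately simplified illustration in NumPy; the expert count, dimensions, and gating scheme are illustrative assumptions, not DeepSeek R1’s actual architecture.

```python
# Illustrative sketch of mixture-of-experts (MoE) routing in NumPy.
# All sizes here are toy values chosen for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts only."""
    scores = x @ gate_w                    # one gating score per expert
    top = np.argsort(scores)[-TOP_K:]     # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS experts run for this token -- the source
    # of MoE's compute savings at inference time.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Here only 2 of the 8 experts do any work per token, which is why an MoE model can hold far more parameters than it actually computes with on a given input.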
Privacy concerns and the challenge of trust
Despite its potential, DeepSeek faces hurdles in global adoption, particularly due to privacy concerns surrounding Chinese AI models. Many users fear that sensitive data could be exposed, making them reluctant to adopt these technologies.
“Shortly after its launch, privacy concerns pushed many users to self-host DeepSeek models,” Dr Nijsse said. “This is why platforms like Hugging Face are so popular – they allow developers to run AI locally, avoiding potential security risks.”
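Self-hosting a model downloaded from Hugging Face typically looks like the sketch below, which uses the `transformers` library: the weights are fetched once, then all inference runs on local hardware, so prompts never leave the machine. The specific model ID is an example (a distilled DeepSeek R1 variant); any open model on Hugging Face would work the same way.

```python
# Minimal sketch of self-hosting a Hugging Face model locally with the
# `transformers` library. The model ID is an example choice, not a
# recommendation; swap in any open model you have the hardware for.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

def load_local_model(model_id: str = MODEL_ID):
    """Download the weights once; subsequent runs use the local cache."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model

def generate(prompt: str, tokenizer, model, max_new_tokens: int = 128) -> str:
    """Run inference entirely on the local machine."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    tok, mdl = load_local_model()
    print(generate("Summarise mixture-of-experts in one sentence.", tok, mdl))
```

Because nothing is sent to a remote API after the initial download, this pattern addresses the data-exposure worry Dr Nijsse describes, at the cost of needing enough local memory and compute to run the model.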