Table of Contents
Who/What is DeepSeek and What Have They Done?
DeepSeek is a newer competitor in the AI industry originating in China. The organization has recently broken into unprecedented territory, crushing benchmarks for existing AI models at a fraction of the cost. Even OpenAI’s flagship models struggle to keep up. In fact, DeepSeek’s new R1 model even surpasses OpenAI’s 4o-1217 in certain tests. While the performance is impressive for an organization with relatively minimal funding, the more impressive aspect of this model is the cost. Quotes estimate that R1 can be more than 25 times cheaper than OpenAI’s flagship models.

Impact and Explanation?
This meant a few things. During pre-market activity on Monday morning, the U.S. markets were down in a major way. Fears of China’s leap over the United States in the AI race (especially with such little capital invested comparably) led to sell offs early in the day. At first, I was also caught up in the commotion – “How could they have advanced this much on such little funding when we had everything in our favor?”
These thoughts at 3AM kept me awake and I decided to dig in. After a bit of research I calmed, then grew confused. I began to see this development as a great thing and an asset for American businesses, so why are we panicking?
What Altered my Perspective?
As I dug in a bit and took some time to think, I realized a few key things:
- The R1 Model is open sourced
- This development marks a significant leap into the future of AI
- The training model used is replicable
What do those realizations actually mean though? Well, the R1 model being open source means that everyone can access it, understand the innerworkings of the model, and upgrade/modify/implement without oversight. If a model like this was developed and remained proprietary, I can see the concerns that may arise from falling behind our cohabitors.

Further Significance
This development’s significance cost reduction means that for a majority of the American businesses (who use AI rather than develop it), massive cost reductions and efficiency gains are in the works. Organizations have spent over $1 billion on consumer generative AI products in 2024. If you scale the potential 25x cost reduction across that massive number, you get immense savings that participating businesses can now realize.
The caveat here is the model and chip developers. I can imagine why these organizations may have fears, but even they can realize benefits from this development. For chip developers, the concern is that more efficient models means less need for compute and therefore less chips in demand. The fallacy here is that we will always need more chips and we will continue building more advanced models and solutions, negating this dispute.
Model developers may be hit the hardest, needing to get up to snuff… but they were given the secret ingredient.

Not only is the DeepSeek R1 model open source, but DeepSeek provided insight into how the model was developed. It turns out, that same model training method can be used on the existing advanced models to not only advance them further, but make them more efficient. Now that whales like OpenAI had someone complete that research for them, they can take the practice and very quickly optimize their product using a proven method. All in all, I find this to be a great development for businesses and an innovation accelerator.
Concerns – Black Boxes
Despite the rosy view presented above, as I thought through this development I also developed new concerns that may become a factor down the road. Many of these models are “black boxes”. We don’t necessarily know all of the ins and outs of these models, but they just work.
The harm in that lies in the possibility of hiding intentional biases or ideology amongst the models and as they become used more broadly, those impacts scale and the biases and ideologies become baked into everyday life. The bad actors here could train a model on materials that skew the independence of the model. Bad actors could package a model ship it as ‘open source’ and no one would know what the model was originally trained on (if done correctly). This could be even more impactful and dangerous as we move closer and closer to AGI. Hopefully we begin to reveal more of the inner workings of these black boxes and can develop automated bias testing sooner rather than later.
Update 2/17/25 – In evaluating this model for fit at my work, the above fear was proven. In asking the model about the 1989 Tiananmen Square protests, the model failed to respond adequately. This was only found after extensive testing and nowhere in the physical code. This confirms the possibility of purposeful model bias in such models.
Your Thoughts – Internalization
Well, what do you think? Am I missing important context, going crazy, right on the money, somewhere in the middle? Is DeepSeek a good thing for innovation or a threat to our powerhouses exposing lack of creativity? Let me know your perspective!
Meta
This is my first post outside of the intro. I do not expect to post much on current news, but this was too enticing to me. I have other drafts in the works for more standard material, but I hope this prompts your thoughts nonetheless.
~ Quinn