The latest artificial intelligence to become a sensation is DeepSeek. The Chinese model has shaken Silicon Valley by producing results comparable to those of OpenAI and other tech giants at a much lower cost. DeepSeek is also open source, like Meta's models and some from Google and Microsoft, which means it can be freely downloaded, audited, modified and even run locally on the user's computer; its MIT license allows commercial use without restrictions. This is an important advantage, since, unlike when using the chatbot via web or app, user data will not end up stored on servers in China.
In this article we will explain how to download and install the language model easily, a procedure that will also let you try other open-source models from Meta, Mistral, Microsoft or Google, among other providers.
To do so we will use LM Studio, a program available for Windows, Mac and Linux that makes it easy to run large language models (LLMs) locally on the user's computer, whether desktop or laptop. The application gives access to a catalog of available models and simplifies the download and installation process, in addition to providing an interface similar to that of online chatbots such as ChatGPT and company.
Why a distilled model instead of the complete DeepSeek
It should be noted that the DeepSeek you will be able to use with LM Studio is a 'distilled' version of the complete original model. That is, a smaller model trained to reproduce the responses of a larger one, learning from the larger model's outputs and fine-tuning. Distilled models usually have fewer, more optimized parameters, which reduces resource consumption and lets them run more efficiently without losing too much of the original model's capability.
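The core idea of distillation can be sketched in a few lines: the student model is trained to match the teacher's "soft" output distribution over tokens, typically by minimizing a KL divergence at a softened temperature. The logits below are made-up illustrative values, not real model outputs.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher T softens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's soft targets and the
    student's predictions: the core objective of distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.2, 1.1, 0.3]        # hypothetical logits over 3 tokens
good_student = [3.0, 1.0, 0.4]   # close to the teacher: low loss
bad_student = [0.2, 2.5, 1.0]    # disagrees with the teacher: higher loss
print(distillation_loss(teacher, good_student))
print(distillation_loss(teacher, bad_student))
```

Training the student against these soft targets, rather than only hard labels, is what lets a much smaller model retain most of the larger model's behavior.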
This is not optional but necessary in order to run a language model like DeepSeek on a consumer computer. The most powerful version, DeepSeek-R1:671b, has 671 billion parameters and weighs 120 GB after being compressed by around 80% from its original 720 GB. In contrast, DeepSeek's distilled models do not reach 5 GB or exceed 8 billion parameters.
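The rough arithmetic behind those figures is simply parameter count times bits per parameter. The sketch below is a back-of-the-envelope estimate that ignores embeddings, metadata and runtime overhead, and the bit-widths are illustrative assumptions:

```python
def model_size_gb(n_params, bits_per_param):
    """Approximate weight storage: parameters x bits, converted to GB."""
    return n_params * bits_per_param / 8 / 1e9

full = 671e9  # DeepSeek-R1's 671 billion parameters
print(f"R1 671B at 8 bits/param:    ~{model_size_gb(full, 8):.0f} GB")
print(f"R1 671B at ~1.6 bits/param: ~{model_size_gb(full, 1.6):.0f} GB")
# An 8-billion-parameter distilled model at 4-bit quantization
print(f"Distill 8B at 4 bits/param: ~{model_size_gb(8e9, 4):.1f} GB")
```

The 8-bit figure lands near the uncompressed size the article cites, aggressive sub-2-bit quantization approaches the compressed one, and the 4-bit distilled estimate matches the "under 5 GB" downloads.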
In addition, laptops and desktop computers fall far short of the capacity needed for a model the size of R1:671b. To test it, tech YouTuber Matthew Berman used the Vultr cloud service, which gave him access to a server with a 128-core AMD EPYC 9534 processor, 2.32 TB of RAM, 8 NVMe SSDs and 8 AMD Instinct MI300X graphics cards with 192 GB each. What is colloquially known as 'a NASA computer'.
With the distilled DeepSeek, on a standard consumer PC that is not recent and lacks a CPU with an NPU (a processor dedicated to AI tasks), almost all the processing falls on the graphics card. It is therefore advisable that the GPU be dedicated and have enough VRAM, as well as the computer enough RAM. As a reference, testing the model on a machine with 32 GB of RAM and an AMD Radeon RX 580 graphics card with 8 GB of VRAM, it responds more slowly than its counterpart on the web and app, but it is perfectly functional, albeit with the GPU's processing load at almost 100%.
That is why lighter, more efficient distilled models are necessary. Still, if you want to launch into the adventure and try R1:671b, you can do it through AnythingLLM, another program similar to LM Studio, somewhat less friendly to the beginner, which does let you download the model by importing it from Ollama. Ollama is another tool designed to run LLMs locally, but its operation is much more arid for the average consumer, forcing you to interact with the AI through commands in the Windows terminal. Once configured, AnythingLLM also offers a friendly interface for using chatbots.
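As a reference for what "interacting through commands" means, a minimal Ollama session looks like the sketch below. The model tags come from Ollama's public library and may change over time, so check the current names before running anything:

```shell
# Download a distilled DeepSeek-R1 variant from the Ollama library
ollama pull deepseek-r1:8b

# Chat with it directly in the terminal
ollama run deepseek-r1:8b

# List the models installed locally
ollama list

# The full 671B model is also listed, but needs server-class hardware:
# ollama pull deepseek-r1:671b
```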
How to install DeepSeek on your computer
With LM Studio downloaded and installed on your computer, you must follow these steps:
- Open LM Studio.
- If it is the first time you use it, at startup it will offer to download Meta's Llama 3.1 8B model. You must accept in order to then be able to install other models. Once downloaded, it will give you the option to load it, and you can move on.
- In the icon menu on the left, click on the magnifying glass, Discover.
- In the window that opens, in the Model Search section, you can search and select among several available models. Type Deepseek into the search box at the top. You will see two options corresponding to R1: DeepSeek R1 Distill (Qwen 7B) and DeepSeek R1 Distill (Llama 8B). Either of the two is valid, the second being a bit heavier, as it has one billion more parameters. After selecting one, click on the green Download button.

- A pop-up notification will inform you that the download has finished and give you the option to load the model. Click Load Model.
- With the model loaded, click on the Chat option in the main menu, located on the left side.
- You are now in the conversation window with the AI, but you must first choose a model. Click Select a Model to Load, the button at the top of the window, and select the DeepSeek one.

- LM Studio will show you some model configuration options. Click Load Model and you can start chatting with the AI. The logo it shows at the start will be LM Studio's, not DeepSeek's, and to have it speak to you in Spanish, you just have to write to it in Spanish from the start and leave English behind.
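Beyond the chat window, LM Studio can also expose a local OpenAI-compatible server (from its developer options), which lets you query the loaded model from your own scripts. The sketch below assumes the server's usual default address (localhost, port 1234) and uses a placeholder model identifier; both should be checked against what your LM Studio install actually shows:

```python
import json
import urllib.request

def build_chat_payload(prompt, model="deepseek-r1-distill-qwen-7b"):
    """OpenAI-style chat payload; the model id must match the model
    loaded in LM Studio (the default name here is an assumption)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt, url="http://localhost:1234/v1/chat/completions"):
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        # Requires LM Studio's local server to be running with a model loaded.
        print(ask("Explain model distillation in one sentence."))
    except OSError:
        print("LM Studio's local server does not appear to be running.")
```

Since everything runs on your own machine, prompts sent this way never leave your computer, which is the same privacy advantage the article highlights for the desktop app.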