Can local LLMs like DeepSeek R1 potentially send prompts or data to servers in China? [closed]
I'm exploring the use of local large language models (LLMs) like DeepSeek R1 for my projects. However, I'm concerned about data privacy and security, particularly regarding whether these models could potentially send user prompts, responses, or metadata to external servers, especially to China.
What are the potential ways in which a locally deployed LLM could send data externally? I'm looking for technical insights or examples of mechanisms through which local LLMs might transmit data without the user’s knowledge.
I'm using LM Studio with llama.cpp. I started wondering about this because I also had the misconception that the content filters would not be present in the local version, assuming they were applied as server-side post-processing; but if you ask the local model about a specific square in China, it still refuses to answer.
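For what it's worth, for Python-based runtimes (not for a packaged app like LM Studio, where you'd have to watch traffic externally instead) one crude in-process check I can think of is to replace `socket.socket` with a guard class before loading the model, so any attempt to open an outbound connection fails loudly. This is just a sketch of the idea; the class name `NoNetworkSocket` is my own invention:

```python
import socket

class NoNetworkSocket(socket.socket):
    """Socket subclass that refuses to connect anywhere.

    If this is installed as socket.socket before an inference library
    is imported and used, any attempt by that library to open a
    network connection raises immediately instead of silently
    transmitting data.
    """
    def connect(self, address):
        raise RuntimeError(f"blocked outbound connection to {address}")

    def connect_ex(self, address):
        raise RuntimeError(f"blocked outbound connection to {address}")

# Install the guard before importing/using the inference library.
socket.socket = NoNetworkSocket
```

This obviously only catches connections made through Python's `socket` module in the same process; native code or a separate server process would need OS-level monitoring (firewall rules, packet capture) instead.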
(DeepSeek and China are just examples here; they could equally be replaced by Llama, Meta, and the USA, etc.)