Has DeepSeek really gotten so good it surpasses the US? Where would they find that many geniuses within a year, and without even any returnees from studying abroad?

The hot topic now is how the US will respond.

Using intellectual-property violations as the pretext is pure face-saving theater; it accomplishes nothing. Does the content ChatGPT shows to paying API users automatically become OpenAI's IP assets??? :cool: Let a high court rule on whether that is reasonable??

A few clear-headed people in the US have already said it: if the standard is "misappropriating" someone else's IP, then every AI model developer has the same problem, and they are all criminals.

I saw one clear-headed American say that if the US wants to contain DeepSeek, it is simple: OpenAI open-sources everything it has, including o1, and it can basically keep its own ecosystem dominant. But that is pure idealism. America runs on capitalists, and OpenAI's shareholder equity is worth over a hundred billion dollars. Given capital's innate greed, who would open-source everything? They won't do it. The consequence is that OpenAI's adoption will gradually be displaced by DeepSeek.

Right. All the AI models scrape data from websites worldwide without authorization.
 

OpenAI’s o1 model uses chain-of-thought reasoning but doesn’t show users what is happening behind the scenes, Qiao said. Taking it a step further, the reasoning DeepSeek’s model produces can be used to train a smaller AI model, she added.

The Wall Street Journal said OpenAI's o1 is a black box that won't show you its reasoning process, while DeepSeek explicitly shows the detailed reasoning, and that DeepSeek is the one suited to training other small AI models.
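The distillation idea Qiao describes can be sketched in a few lines: collect the visible reasoning traces from the big model and pack them into supervised fine-tuning records for a small one. A minimal illustrative sketch; the record fields and the `format_sft_example` helper are hypothetical, not any real training API:

```python
# Hypothetical sketch: turning a reasoning model's visible chain-of-thought
# into one supervised fine-tuning (SFT) record for a smaller student model.
def format_sft_example(question: str, reasoning: str, answer: str) -> dict:
    """Pack one distillation sample: the student is trained to reproduce
    both the reasoning trace and the final answer."""
    return {
        "prompt": question,
        "completion": f"<think>{reasoning}</think>\n{answer}",
    }

sample = format_sft_example(
    "What is 17 * 24?",
    "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408",
    "408",
)
print(sample["completion"].endswith("408"))  # True
```

A model that hides its reasoning only exposes the final answer for this kind of harvesting, which is exactly the asymmetry the quote points at.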


Naveen Rao, vice president of AI at San Francisco-based Databricks, which does not use the technique when terms of service prohibit it, said that learning from rivals is "par for the course" in the AI industry. Rao likened this to how automakers will buy and then examine one another's engines.
"To be completely fair, this happens in every scenario. Competition is a real thing, and when it's extractable information, you're going to extract it and try to get a win," Rao said. "We all try to be good citizens, but we're all competing at the same time."
 
Let the bullet fly a while longer.
No need to let the bullet fly. Common sense is enough to tell what is going on. The big country is a prolific producer of 10,000-jin-per-mu harvests, the Hanxin chip, gene-edited babies, Jiang Ping... almost never an exception. Word is that Alibaba's AI is now "far ahead" too.
 
This is probably the situation where, to use anything at all, you first have to accept some agreement, and agreements usually carry a lot of fine print that most people accept without reading.

Presumably ChatGPT's fine print includes a clause that you cannot use it to develop your own model.

Of course, what happens if you violate it is another question; for an ordinary user it just means you can't use it anymore.

Microsoft says it is investigating while adopting the thing itself first. Very pragmatic.

First in the cloud, and now they say it will run on PCs too. You would be a fool not to use something good that is completely free! If even Microsoft is using it, is there anything left to question?

Microsoft also said customers would soon be able to run the R1 model locally on their Copilot+ PCs, a move that could potentially ease privacy and data-sharing concerns over the use of the model.

Meanwhile, Microsoft and OpenAI are probing if data output from OpenAI's technology was obtained in an unauthorized manner by a group linked to DeepSeek, Bloomberg News reported on Tuesday.
 

Open source, and cheap.
 
Does local deployment mean training it yourself bit by bit? Like raising a virtual pet? :)
 

An old computer, even a Raspberry Pi, can run the R1 8B model. 8 GB of RAM is enough; a GPU is not required, though having one speeds it up.

Think about what AI applications become possible once this runs on a single-board computer like a Raspberry Pi.
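For reference, one common way to try this on an old machine is Ollama, which ships 4-bit quantized builds of the distilled R1 models. A setup sketch, assuming Ollama's standard Linux install script and its published `deepseek-r1:8b` model tag:

```shell
# Install Ollama (Linux / Raspberry Pi OS), then pull and run the 8B distill.
# The 4-bit quantized 8B model needs roughly 5 GB of RAM, so 8 GB is enough;
# it runs on CPU alone, just slowly.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull deepseek-r1:8b
ollama run deepseek-r1:8b "Solve for x: 2x + 3 = 11"
```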
 
The latest rumor is that DeepSeek did indeed use mostly H800s, but that when NVDA sold them the H800s, it quietly gave DeepSeek tools to unlock the chips' full compute. :)


So is NVDA's Jensen Huang a Yu Zecheng that the CCP planted in the heart of America?

Last week, before DeepSeek announced R1, Huang flew to Beijing. One photo shows him with the CEO of robotics heavyweight Unitree, looking very chummy; from the look in Huang's eyes, you would think NVDA and Unitree are rather cozy. :cool:
 

Using it while investigating it; both perfectly legal.
 

AMD published parameters for running the models; the largest a 32 GB machine supports is "DeepSeek-R1-Distill-Qwen-32B":

The Ryzen AI Max+ 395 can support up to “DeepSeek-R1-Distill-Llama-70B”, but only in 128GB and 64GB memory capacities. The 32GB supports up to “DeepSeek-R1-Distill-Qwen-32B”.


Nvidia is in serious trouble when it comes to AI Model execution. Both Apple & AMD are offering compute platforms with up to 128GB of RAM that can execute VERY LARGE AI models. NVidia cannot touch the price/performance of these machines and apparently they have no plans to create a competing product anytime soon. It's for this reason that I bought my son a 48GB MacBook Pro M4Pro laptop - the ability to run larger AI models.

This weakness in NVidia hardware is also causing Mac Mini sales to skyrocket because you can put 64GB of RAM into an M4Pro model and run 64GB models that the 5090 will NEVER run for $2699.
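The RAM figures quoted above follow from simple arithmetic: parameter count times bytes per weight, plus some overhead for the KV cache and runtime buffers. A back-of-the-envelope sketch; the 20% overhead factor is my assumption, not a published spec:

```python
def model_ram_gb(params_billions: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Rough RAM needed to hold a model's weights, with ~20% added
    for KV cache and runtime buffers (the overhead factor is a guess)."""
    bytes_for_weights = params_billions * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 2**30

# 8B distill at 4-bit quantization: fits in 8 GB of RAM.
print(round(model_ram_gb(8, 4), 1))   # 4.5
# 70B distill at 4-bit: needs the 64/128 GB configurations.
print(round(model_ram_gb(70, 4), 1))  # 39.1
```

This is why a 70B distill lands in the 64/128 GB tiers while the 8B one runs on an 8 GB machine.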
 


I personally installed R1 8B on a roughly ten-year-old Intel i5 with 8 GB of RAM and no GPU. It runs, and it can correctly solve Grade 12 high-school math problems. The moment the experiment's results came in on Sunday night, I knew NVDA stock was headed straight off a cliff.
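Once a local runtime such as Ollama is serving the model, a script can query it over the local HTTP API. This sketch only builds and prints the request payload; the endpoint and field names follow Ollama's documented `/api/generate` API, and actually sending it requires a running local server:

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's local /api/generate endpoint
    (POST http://localhost:11434/api/generate)."""
    return {"model": model, "prompt": prompt, "stream": False}

# A Grade-12-style math prompt, like the ones tested above.
req = build_generate_request(
    "deepseek-r1:8b",
    "Solve for x: x^2 - 5x + 6 = 0",
)
print(json.dumps(req))
```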
 
Since this afternoon, DeepSeek's site has been essentially down; even users who registered last week are getting a BUSY message. :mad:
 

Scientists are flocking to DeepSeek-R1, a cheap and powerful artificial intelligence (AI) ‘reasoning’ model that sent the US stock market spiralling after it was released by a Chinese firm last week.

“Based on its great performance and low cost, we believe Deepseek-R1 will encourage more scientists to try LLMs in their daily research, without worrying about the cost,” says Huan Sun, an AI researcher at Ohio State University in Columbus. “Almost every colleague and collaborator working in AI is talking about it.”

Since R1’s launch on 20 January, “tons of researchers” have been investigating training their own reasoning models, based on and inspired by R1, says Cong Lu, an AI researcher at the University of British Columbia in Vancouver.

R1 is also showing promise in mathematics. Frieder Simon, a mathematician and computer scientist at the University of Oxford, UK, challenged both models to create a proof in the abstract field of functional analysis and found R1’s argument more promising than o1’s.
 