
NVIDIA stock to lose $400 billion (US tech $1 trillion) after DeepSeek release

So what do you believe the long-term solution (year 2040+) will be when AGI is reached? And then combine that with robotics physically working in your average store in the 2030s?
We will certainly see one big societal upheaval sooner or later, but AI will for sure be embedded into the fabric of society - delivery, dogs, waiters, workers. Factories will produce factories via automated pipelines, AI will do the research and so on. Wars will be waged using machines and so on. However, I don't expect things like that to be everywhere - certain countries and areas will stay relatively backward, albeit with mobile phones and digital currency. Hell, in my own home country hospitals still don't use computers much - they still rely on pen and paper.

Also, history tells us that human tribes migrate and invade other countries, so even highly developed societies might collapse. There are also potential natural causes like earthquakes and so on.
 

ResurrectedContrarian

Suffers with mild autism
Overall, it's still riding on US research, so I don't share the doom perspective at all.

It's one more company innovating, which happens to be outside the US, but if you read their papers you'll find all the usual US-based seminal papers cited, prior models, RL techniques pioneered by OpenAI, etc.

We saw some innovation going on in France with Mistral, but that didn't stay at the cutting edge; likewise we see another innovative company here in China, but it is far from the case that somehow the balance shifts because of it.
 

diffusionx

Gold Member
I agree UBI can't work for the reasons you presented. But it's the only thing we will have, LOL! What other solution will there be, if we won't implement HUGE regulations to protect human societies?
We are going to string up the oligarchs, put their heads on a pike, ban AI, and execute anyone who tries to restart it.

That is what is going to happen because it's obvious they don't have any solutions to any of the problems they are so eager to cause.
 
Last edited:

IDKFA

I am Become Bilbo Baggins
I have no idea what young folks will do for work in 5-10 years.

Not just young people. Anybody working, regardless of age and experience could lose their jobs.

My role as a data engineer could easily be replaced by AI in a few years. All that time I spend monthly trawling through data, identifying trends, writing reports, presenting data and action plans etc. could be done by AI in a fraction of the time. I can see it happening soon as well. What was the point of all that studying, all that time learning Excel, SQL and Python? All those hours spent working on my oratory skills to present the data and push back against key stakeholders. Years and years wasted, to be replaced by a machine.


UBI isn't the answer, as people will only be given enough money to meet their basic needs. People need a purpose in life, and a majority don't want to sit around and not be useful. Oh, but this will give people the chance to be more creative and artistic. They can take the UBI money, then spend the time writing a book, or painting landscapes to sell for money. Well tough shit, because AI will take that away from us as well. You know that novel you've been writing for 10 years? Well, an LLM can crank out the same novel in a day.

I think if enough people are impacted then we'll start to see major pushback from the public to fight for their jobs and their humanity.
 

ResurrectedContrarian

Suffers with mild autism
Just need that code to be forked so it can learn about Tiananmen Square
It "knows" all about Tiananmen Square, it simply won't speak about it.

The censorship on these models happens at the behavioral level in reinforcement-based post-training, not at the level of the massive corpus of training text.

Unlearning the behavioral modes it has been given via additional reinforcement learning would be easy.
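For anyone curious what that would look like in practice: the simplest version is just additional fine-tuning on prompt/response pairs that answer instead of refuse. A minimal sketch below; the checkpoint name and the JSONL file are illustrative placeholders, not a published recipe.

```python
# Minimal sketch: plain supervised fine-tuning on prompt/response pairs that answer
# instead of refuse, to override a behavior learned in RL post-training.
# The checkpoint name and the JSONL file are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"   # example distilled checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical JSONL of {"prompt": ..., "response": ...} pairs.
ds = load_dataset("json", data_files="uncensor_pairs.jsonl", split="train")

def to_features(ex):
    text = ex["prompt"] + "\n" + ex["response"] + tok.eos_token
    out = tok(text, truncation=True, max_length=1024)
    out["labels"] = out["input_ids"].copy()               # causal LM target = input tokens
    return out

ds = ds.map(to_features, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="r1-sft", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=1e-5),
    train_dataset=ds,
)
trainer.train()
```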
 

A.Romero

Member
Fair enough. It's already known that it's trained using other models and that it was created for speed optimization. You can ask it that and it will tell you itself. It won't tell you what other AI models were used but will refer you to an FAQ. But what is its scalability with more complex routines? Are we saying that the optimization was so bad in the other models that a bunch of recent graduates, 1 to 2 years out of school, were able to produce something better? I think what we are seeing is that AI models will all be specialized in routine training and usage. Also, what validation do we have on how much was really used to train and operate it?
The document includes what models were used as a reference; they include a distilled version of Llama.

We are saying that optimization is a continuous process and yes, every iteration is expected to improve, especially when there is so much money involved. They took previous models and optimized them, like everyone else is doing in some form or another. Regarding their experience, I would say that computer science innovation doesn't always come from the most experienced minds. Page and Brin were 21 and 22 when they started working on BackRub (it became Google later on), and by the time they were about 22 and 23 they exceeded the bandwidth Stanford could provide.

I understand you are skeptical. I myself don't have the tools to test and make sure those benchmarks are real, but I also don't think a benchmark that has been continuously changing and being led by different models would lie just because.

This particular model is open source, so anyone can look into it and retrain it; if anything, it has more ways to be verified than models that are not open source, such as OpenAI's.
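For context on what "distilled" means here: one classic recipe trains a smaller student model to match a larger teacher's per-token output distributions. This is purely an illustration of the general idea, not DeepSeek's exact method; model names are placeholders and a shared tokenizer/vocabulary is assumed.

```python
# Sketch of classic logit distillation: a small student is trained to match a large
# teacher's per-token distributions (KL divergence) plus the usual next-token loss.
# Model names are placeholders; a shared tokenizer/vocabulary is assumed.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher = AutoModelForCausalLM.from_pretrained("teacher-model")   # large reasoning model (placeholder)
student = AutoModelForCausalLM.from_pretrained("student-model")   # small Llama-style model (placeholder)
tok = AutoTokenizer.from_pretrained("student-model")
tok.pad_token = tok.pad_token or tok.eos_token

def distill_loss(batch_texts, temperature=2.0, alpha=0.5):
    enc = tok(batch_texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        t_logits = teacher(**enc).logits                  # soft targets from the teacher
    s_out = student(**enc, labels=enc["input_ids"])       # student forward + hard CE loss
    kl = F.kl_div(
        F.log_softmax(s_out.logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * kl + (1 - alpha) * s_out.loss           # blend soft and hard losses
```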
 

RCX

Member
It "knows" all about Tiananmen Square, it simply won't speak about it.

The censorship on these models happen at the behavioral level in reinforcement-based post training, not at the level of the massive corpus of training text.

Unlearning the behavioral modes it has been given via additional reinforcement learning would be easy.
Haven't read too deeply into it, but assuming it's fully open source, that should be more than possible, right?

I've hoped since day one that open AI (not "OpenAI") would win the race. I'd rather everyone have this technology at minimum cost than see it used and abused in a small number of hands.
 

readonly

Member
Not just young people. Anybody working, regardless of age and experience could lose their jobs.

My role as a data engineer could easily be replaced by AI in a few years. All that time I spend monthly trawling through data, identifying trends, writing reports, presenting data and action plans etc. could be done by AI in a fraction of the time. I can see it happening soon as well. What was the point of all that studying, all that time learning Excel, SQL and Python? All those hours spent working on my oratory skills to present the data and push back against key stakeholders. Years and years wasted, to be replaced by a machine.


UBI isn't the answer, as people will only be given enough money to meet their basic needs. People need a purpose in life, and a majority don't want to sit around and not be useful. Oh, but this will give people the chance to be more creative and artistic. They can take the UBI money, then spend the time writing a book, or painting landscapes to sell for money. Well tough shit, because AI will take that away from us as well. You know that novel you've been writing for 10 years? Well, an LLM can crank out the same novel in a day.

I think if enough people are impacted then we'll start to see major pushback from the public to fight for their jobs and their humanity.
We will create a new purpose, which may end up being akin to the Matrix. In the short term it will be painful, but we will literally be in the pods and eating the bugs, as they say. You may not want to live a virtual life, but each new generation will be more and more willing to do it. Kinda like our generation is the Amish and the next generation embraces the "tech world".
 

MarV0

Member
If a person lost their life savings over one stock, they did something seriously wrong. $NVDA has been a great play for me in the past 18 months (6 figures in realized gains and more than that in unrealized gains). I'm happy to buy the dip here as I needed to replenish the shares I've sold off anyway.
Oh my, we just found GAF's secret billionaire and market guru. Are you here out of curiosity to observe and laugh at our miserable peasant lives?
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Not just young people. Anybody working, regardless of age and experience could lose their jobs.

My role as a data engineer could easily be replaced by AI in a few years. All that time I spend monthly trawling through data, identifying trends, writing reports, presenting data and action plans etc. could be done by AI in a fraction of the time. I can see it happening soon as well. What was the point of all that studying, all that time learning Excel, SQL and Python? All those hours spent working on my oratory skills to present the data and push back against key stakeholders. Years and years wasted, to be replaced by a machine.


UBI isn't the answer, as people will only be given enough money to meet their basic needs. People need a purpose in life, and a majority don't want to sit around and not be useful. Oh, but this will give people the chance to be more creative and artistic. They can take the UBI money, then spend the time writing a book, or painting landscapes to sell for money. Well tough shit, because AI will take that away from us as well. You know that novel you've been writing for 10 years? Well, an LLM can crank out the same novel in a day.

I think if enough people are impacted then we'll start to see major pushback from the public to fight for their jobs and their humanity.

The CEO of Anthropic said this to the Wall Street Journal just last week!!!!




He's right... we'll need to have a real conversation as a human race that most aren't ready to have.
 
Why does it affect Nvidia? Shouldn't it affect OpenAI, if anyone at all?
If the Chinese AI claims are legit, it means less hardware is required to produce current top-tier AI... which means fewer Nvidia GPU sales.

But even if the Chinese AI claims are legit (we don't know yet), future top-tier AI may require tons of hardware... or it'll run on a potato, who knows.

The timing's odd though... the US announces $500 billion in AI investment, then China shows it can be done cheaply. China trying to save the US some money? Don't think so.
 
Last edited:

Wildebeest

Member
He's right... we'll need to have a real conversation as a human race that most aren't ready to have.
I think it's really only Silicon Valley nerds who worry about their lives having no meaning if they aren't the most intelligent form of life in the universe.
 

Kamina

Golden Boy
Buy the dip
Earnings call is coming up, so either more buying or market goes brrrr
 
Last edited:

TheAssist

Member
Not understanding the connection. Doesn't DeepSeek use Nvidia GPUs as well behind the scenes?
One of the reasons (probably not the only one) is pricing.

OpenAI wants $60 per 1 million output tokens generated with o1.
DeepSeek R1, which generates equivalent quality, wants $2.19.

At this price the profit margin goes down the drain. OpenAI wasn't profitable before; at these prices it's questionable whether all that investment in Nvidia hardware was worth it. Big US companies like Meta, Amazon, Google and MS have already spent that money hoping they can dictate pricing for years to come and make it a profitable business.

R1 is open source, so everyone can spin up their own servers, and hence prices might go down even further (though R1 might be supported by the Chinese state for all we know, just to break the US AI companies' oligopoly).

So yeah, Nvidia can sell to a broader audience (small and medium companies can now quite easily run very powerful distilled models locally on premises and don't need cloud providers like Google or Amazon), but Nvidia won't be able to dictate pricing as much, because when the margin on AI services goes down, no one can pay these sky-high fantasy prices anymore.

Basically R1 has become the new ground level, since it's open. There is no need to use anything lesser than it (or V3, for that matter). In order for US companies to keep charging high prices for their APIs they need to heavily outperform with their newer models, but news from the past few weeks suggests the new models might underperform. We will see.
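Just to put that pricing gap in concrete terms, here's a quick back-of-envelope comparison. The prices are the list prices quoted above; the monthly token volume is a made-up example.

```python
# Back-of-envelope on the pricing gap quoted above (list prices, USD per 1M output tokens).
# The monthly token volume is a made-up example.
o1_price_per_m = 60.00      # OpenAI o1, as quoted
r1_price_per_m = 2.19       # DeepSeek R1, as quoted

monthly_output_tokens = 50_000_000
o1_cost = monthly_output_tokens / 1_000_000 * o1_price_per_m
r1_cost = monthly_output_tokens / 1_000_000 * r1_price_per_m

print(f"o1: ${o1_cost:,.2f}/month  R1: ${r1_cost:,.2f}/month  "
      f"ratio: {o1_price_per_m / r1_price_per_m:.1f}x")
# -> o1: $3,000.00/month  R1: $109.50/month  ratio: 27.4x
```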
 

viveks86

Member
One of the reasons (probably not the only one) is pricing.

OpenAI wants $60 per 1 million output tokens generated with o1.
DeepSeek R1, which generates equivalent quality, wants $2.19.

At this price the profit margin goes down the drain. OpenAI wasn't profitable before; at these prices it's questionable whether all that investment in Nvidia hardware was worth it. Big US companies like Meta, Amazon, Google and MS have already spent that money hoping they can dictate pricing for years to come and make it a profitable business.

R1 is open source, so everyone can spin up their own servers, and hence prices might go down even further (though R1 might be supported by the Chinese state for all we know, just to break the US AI companies' oligopoly).

So yeah, Nvidia can sell to a broader audience (small and medium companies can now quite easily run very powerful distilled models locally on premises and don't need cloud providers like Google or Amazon), but Nvidia won't be able to dictate pricing as much, because when the margin on AI services goes down, no one can pay these sky-high fantasy prices anymore.

Basically R1 has become the new ground level, since it's open. There is no need to use anything lesser than it (or V3, for that matter). In order for US companies to keep charging high prices for their APIs they need to heavily outperform with their newer models, but news from the past few weeks suggests the new models might underperform. We will see.
Guess I'm going to ask my VP of machine learning WTF he is doing not switching to R1 already. If we can spin up our own servers, we can make it FedRAMP compliant too (though I'm probably going to have to research that more)!


the china show said the deep seek claims are fake news🤷🏼‍♂️

Wait... this could still be fake? No one in the west has tested it?
 
Last edited:

Go_Ly_Dow

Member
I won't be touching the tech stocks because if there's truth to the claims about costs/efficiency then the valuations given to the US tech companies during the AI bubble still remain very high and they have further to fall. So far nothing has been properly discredited.

If the claims are smoke and mirrors then the AI bubble and boom continues, probably to record highs.

So I'll watch this play out from the sidelines, aka not investing in tech companies because of the volatility!
 
Last edited:

Fabieter

Member
Why does it affect Nvidia? Shouldn't it affect OpenAI, if anyone at all?

Man, it all sucks. These CEOs and companies getting rich with fake fucking non-existent money while we have to work like idiots.
Stocks are essentially vibe-check gambling for rich people.

It's not just Nvidia. Most chip companies are down, including cloud infrastructure companies and energy providers.
 

StereoVsn

Gold Member
One of the reasons (probably not the only one) is pricing.

OpenAI wants $60 per 1 million output tokens generated with o1.
DeepSeek R1, which generates equivalent quality, wants $2.19.

At this price the profit margin goes down the drain. OpenAI wasn't profitable before; at these prices it's questionable whether all that investment in Nvidia hardware was worth it. Big US companies like Meta, Amazon, Google and MS have already spent that money hoping they can dictate pricing for years to come and make it a profitable business.

R1 is open source, so everyone can spin up their own servers, and hence prices might go down even further (though R1 might be supported by the Chinese state for all we know, just to break the US AI companies' oligopoly).

So yeah, Nvidia can sell to a broader audience (small and medium companies can now quite easily run very powerful distilled models locally on premises and don't need cloud providers like Google or Amazon), but Nvidia won't be able to dictate pricing as much, because when the margin on AI services goes down, no one can pay these sky-high fantasy prices anymore.

Basically R1 has become the new ground level, since it's open. There is no need to use anything lesser than it (or V3, for that matter). In order for US companies to keep charging high prices for their APIs they need to heavily outperform with their newer models, but news from the past few weeks suggests the new models might underperform. We will see.
I agree with most of this, but I don’t think this is correct:

“DeepSeek R1, which generates equivalent quality, wants $2.19.”

From what I have seen it's not as good as o1 or Google's 2.0 models, but the question is: if it's 80-90% as good for 1/10th the price, does that matter?

And yeah, I absolutely think the CCP is supporting them. There is no way a Chinese company would publish such technical information without CCP approval, same as with Tencent's or Baidu's models.
 

Buggy Loop

Member
If the Chinese AI claims are legit, it means less hardware is required to produce current top-tier AI... which means fewer Nvidia GPU sales.

The Watt steam engine greatly improved the efficiency of coal-fired steam engines. It made coal a more cost-effective power source...
Leading to an increased use of the steam engine in a wide range of industries, and thus increased total coal consumption.

When cars get fuel-efficiency improvements, researchers find that people travel longer distances each year.

A modern example: LED lighting. Highly power-efficient LED light should lower global energy consumption for lighting, but it's the opposite; people install more lights and keep them on for extended periods.

As graphics cards become more and more power-efficient in OPs/watt, they still climb in wattage every generation.

An assembly cross-compiler in the late 70s was so efficient and quick that when it was demonstrated to management, the software manager vetoed it because he would lose the majority of his 50 programmers. The same company implemented it in another office and the team actually expanded, because development became so cost-effective that rather than just supplying the usual companies buying 100+ licenses, they also opened up to new target customers who needed just a few, and they ended up doubling the number of programmers.

Every day, for decades now, there's new software/hardware coming in that changes the language, framework and set of libraries and makes programming more efficient. But the demand for programmers continues to increase because, by and large, they are using new and better tools. Tools that reduce the cost of software development but expand the market for new applications.

AI ain't slowing down. Like rofif said, the critique is more about OpenAI's efficiency at exploiting the GPU farms than about the company selling shovels in a gold rush, but even that misses the big picture of what these AI companies are doing. "Billions" compared to the $6M Chinese model! OMG! Yeah... guys, big tech models are a lot broader and trained for daily tasks, unlike a distilled model like DeepSeek.

It's not like all the research by the big companies suddenly disappears either; these are benchmarks, but model to model the applications vary greatly, especially with closed models. OpenAI or Meta aren't running these GPU farms on a single model for benchmarks; they are training models to take over human jobs. A DeepSeek-V3 is not gonna suddenly beat OpenAI o1 for medical and genetics, because it's a cheap distilled model, more similar to the likes of GPT-4o-mini or Qwen2.5-72B. Just recently DeepSeek-V2.5 was at GPT-4o-mini level on score vs API price. Things move. Cost is what DeepSeek excels at for now, but again, as I said earlier in the day, there are gonna be issues with companies using DeepSeek, as sensitive data gets used for further training, which is a big no-no for most professionals/businesses. On Feb 8th they are also changing pricing on V3, so things indeed move in the AI world.
 
Last edited:

IDKFA

I am Become Bilbo Baggins
China won't regulate, so the USA won't regulate.

The danger of not being first to the singularity is scarier to them than any ethical dilemma.

That's a dangerous game.

Once we hit the singularity, there is then no going back. Pandora's box would have truly been opened, but without careful planning and worldwide cooperation, it could easily be the last invention we ever make.
 

Buggy Loop

Member
That's a dangerous game.

Once we hit the singularity, there is then no going back. Pandora's box would have truly been opened, but without careful planning and worldwide cooperation, it could easily be the last invention we ever make.

yup

Not saying it's a good move forward, but realistically, they ain't slowing down. If it comes to slowing down in the public eye, that's one thing, but behind the curtain they'll be 100% continuing.
 

viveks86

Member
the china show said the deep seek claims are fake news🤷🏼‍♂️

Lol wtf is the china show?

Edit: 🤣 that’s actually the name. Haha


if the chinese AI claims are legit, it means less hardware if required to produce current top-tier AI... which means fewer nvidia GPU sales.

but even if the chinese AI claims are legit (dont know yet), future top-tier AI may require tons of hardware... or it'll run on a potato, who knows.

timing's odd though... US announces $500 trillion on AI investment, then china shows it can be done cheaply. china trying to save US some money? dont think so.
Seems legit guys. Engineers are testing it out in the US on our own servers as we speak. Here is directly from my VP of machine learning. This is literally what he gets paid for. I'm quoting verbatim. Not posting a screenshot as I'm not comfortable with it on the internet.

Me: Hey have you looked into Deep seek R1 yet?

Him: :) yes. The model came out last week.

Me: Are the performance numbers and costs what they claim to be?

Him: Yes. Although I am surprised why the market is reacting to it this way today. By stock market standard, it is a week (eternity) old news. Why did it catch your interest?

Me: Seeing it hit mainstream news with the Nvidia price drop. Didn't know anything about it till then
Him: They spent $5 million to crater Nvidia by $600 B :)
 
Last edited:

E-Cat

Member
The research lead at xAI knows better than the midwits (the bolded part about chain-of-thought is really important):
The market is almost never rational in the short term. I have no stake in $NVDA (except through ETFs), but I think more efficient/capable models are great for $NVDA. The bottleneck is still the capability/usefulness of the models. If they become better/cheaper, demand will surge further. Besides that, CoT adds the necessity - and usefulness - of extra compute at inference time.
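To make the chain-of-thought point concrete: reasoning models emit (and bill for) thinking tokens before the answer, so each query burns several times more output compute than a plain completion. A toy calculation, with made-up token counts and the o1 list price quoted elsewhere in the thread:

```python
# Toy illustration: chain-of-thought models emit reasoning tokens before the answer,
# so each query pays for far more output compute. Token counts are made up; the
# price is the o1 list price quoted earlier in the thread.
answer_tokens = 200
reasoning_tokens = 3000
price_per_m_output = 60.0    # USD per 1M output tokens

plain_cost = answer_tokens / 1e6 * price_per_m_output
cot_cost = (answer_tokens + reasoning_tokens) / 1e6 * price_per_m_output
print(f"plain: ${plain_cost:.4f}/query  with CoT: ${cot_cost:.4f}/query  "
      f"({(answer_tokens + reasoning_tokens) // answer_tokens}x the output tokens)")
# -> plain: $0.0120/query  with CoT: $0.1920/query  (16x the output tokens)
```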
 
Last edited:

IDKFA

I am Become Bilbo Baggins
I think we're delving into sci-fi territory here lol

Not really.

Reaching the singularity means a loss of control. Once we hit this milestone, it means a superintelligent AI system could take over the process of invention and innovation.

This used to be sci-fi, but now it's a very real possibility.
 

viveks86

Member
I think we're delving into sci-fi territory here lol

Not really.

Reaching the singularity means a loss of control. Once we hit this milestone, it means a superintelligent AI system could take over the process of invention and innovation.

This used to be sci-fi, but now it's a very real possibility.

Yeah, people are getting a bit carried away. This is just the usual leapfrogging with technology. My company is still training these AI models to properly read a goddamn PDF file with flowcharts in it. The singularity can (and likely will) happen, but not for a couple of decades. By then I'll retire and be fed by my robot slaves... or be food for them. Whatever they prefer.
 
Last edited:

GHG

Gold Member
Jesus Christ. Talk about missing the timeline of AI.

So cute

Yup. LLM is the end game. That's it. AI stops there. Was a good run. OpenAI and Stargate was just aiming for youtube AI videos.

That's not the point I'm making, and YouTube videos are not the typical use case for the current form of AI, but please continue to go off.

The point is that there is no end game, but anything that gets us progress in a more efficient way is a net positive, and businesses will use that to their advantage, especially if they can do so while reducing capex, even if it means it ends up being a disadvantage to Nvidia.

Best - inference - no - matter - the - vendor

DeepSeek-V3 depends as much on inference speed as any model before it.



AMD always claims higher bandwidth than Nvidia. The opposite of your DeepSeek claim you made that "no longer need to make sure you have access to the massive bandwidth Nvidia". I discover that you do not know what you're talking about.

They claim higher bandwidth on an individual, per-chip basis, but here's the thing you're missing - I specifically mentioned Nvidia's vGPU, which is specifically where their advantage lies in all of this. At scale nobody uses these chips individually. That is where Nvidia is substantially ahead - NVLink gets you up to 900 GB/s interconnect, whereas Infinity Fabric on AMD's side tops out at 128 GB/s. These are the things that should matter less moving forward.
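To put those interconnect numbers in perspective, here's a rough timing of moving a single large tensor between GPUs at each speed. The 4 GB payload is a made-up example; real collectives add latency and protocol overhead on top of raw bandwidth.

```python
# Rough timing for moving one large tensor between GPUs at the interconnect speeds
# cited above. The 4 GB payload is a made-up example; real collectives add latency
# and protocol overhead on top of raw bandwidth.
nvlink_gb_s = 900            # NVLink, as cited
infinity_fabric_gb_s = 128   # AMD Infinity Fabric, as cited
payload_gb = 4

print(f"NVLink:          {payload_gb / nvlink_gb_s * 1e3:6.2f} ms")
print(f"Infinity Fabric: {payload_gb / infinity_fabric_gb_s * 1e3:6.2f} ms")
# -> ~4.44 ms vs ~31.25 ms for the same transfer
```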

your maths are something else

Don't take it from me.

nq0WUxF.png


Less efficient for what GHG? Which tasks are you trying to program? Or benchmark?

Unsloth AI last year achieved 2.2x faster LLM inference with 70% less RAM usage by converting sequential code paths into parallel ones using Triton kernels.
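For anyone who hasn't seen one: a Triton kernel is a Python function compiled to GPU code, which is how that kind of fusing of sequential ops into one parallel pass is done. The toy example below is purely illustrative and is not Unsloth's actual kernel.

```python
# Toy Triton kernel: one fused, parallel pass over memory instead of several separate
# elementwise ops. Purely illustrative; this is not Unsloth's actual kernel.
import torch
import triton
import triton.language as tl

@triton.jit
def fused_mul_add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x * y + y, mask=mask)   # fused multiply-add in one pass

def fused_mul_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_mul_add_kernel[grid](x, y, out, n, BLOCK=1024)
    return out

x = torch.randn(1_000_000, device="cuda")
y = torch.randn(1_000_000, device="cuda")
torch.testing.assert_close(fused_mul_add(x, y), x * y + y)
```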

Draw your own conclusions from the data:

vxKqptE.jpeg


Ug2eZFq.jpeg
QkjZgVb.png


Is your whole premise that Nvidia gets 85% of market share because of CUDA? Look at the big tech firms. None are using CUDA. You're missing the whole reason why Nvidia sells like it does. Total cost of ownership. Not CUDA. Even with AMD getting better theoretical numbers on paper, their racks and implementation at these big AI-farm levels are total shit.

They are all ultimately using CUDA. What are you talking about? Assuming they are running Nvidia GPUs, what is their high-level language of choice ultimately compiling into? Triton literally compiles to CUBIN, and you want to reference that while also saying they aren't using CUDA?

Nvidia's early investment in the space and their robust ecosystem and software stack are the reason they are currently so dominant, not total cost of ownership.
Please read the following as an example (specifically, you might find the bolded interesting):

Key Findings

  1. Comparing on paper FLOP/s and HBM Bandwidth/Capacity is akin to comparing cameras by merely examining megapixel count. The only way to tell the actual performance is to run benchmarking.
  2. Nvidia’s Out of the Box Performance & Experience is amazing, and we did not run into any Nvidia specific bugs during our benchmarks. Nvidia tasked a single engineer to us for technical support, but we didn’t run into any Nvidia software bugs as such we didn’t need much support.
  3. AMD’s Out of the Box Experience is very difficult to work with and can require considerable patience and elbow grease to move towards a usable state. On most of our benchmarks, Public AMD stable releases of AMD PyTorch is still broken and we needed workarounds.
  4. If we weren’t supported by multiple teams of AMD engineers triaging and fixing bugs in AMD software that we ran into, AMD’s results would have been much lower than Nvidia’s.
  5. We ran unofficial MLPerf Training GPT-3 175B on 256 H100 in collaboration with Sustainable Metal Cloud to test the effects of different VBoost setting
  6. For AMD, Real World Performance on public stable released software is nowhere close to its on paper marketed TFLOP/s. Nvidia’s real world performance also undershoots its marketing TFLOP/s, but not by nearly as much.
  7. The MI300X has a lower total cost of ownership (TCO) compared to the H100/H200, but training performance per TCO is worse on the MI300X on public stable releases of AMD software. This changes if one uses custom development builds of AMD software.
  8. Training performance is weaker, as demonstrated by the MI300X ‘s matrix multiplication micro-benchmarks, and AMD public release software on single-node training throughput still lags that of Nvidia’s H100 and H200.
  9. MI300X performance is held back by AMD software. AMD MI300X software on BF16 development branches have better performance but has not yet merged into the main branch of AMD’s internal repos. By the time it gets merged into the main branch and into the PyTorch stable release, Nvidia Blackwell will have already been available to everyone.
  10. AMD’s training performance is also held back as the MI300X does not deliver strong scale out performance. This is due to its weaker ROCm Compute Communication Library (RCCL) and AMD’s lower degree of vertical integration with networking and switching hardware compared to Nvidia’s strong integration of its Nvidia Collective Communications Library (NCCL), InfiniBand/Spectrum-X network fabric and switches.
  11. Many of AMD AI Libraries are forks of NVIDIA AI Libraries, leading to suboptimal outcomes and compatibility issues.
  12. AMD customers tend to use hand crafted kernels only for inference, which means their performance outside of very narrow well defined use cases is poor, and their flexibility to rapidly shifting workloads is non-existent.



That's nothing new and has nothing to do with inference speed or computational parallelism. They went to the metal with near assembly language on GPU. What does that have to do with GPU vendor? There's a mountain pile of alternatives to Cuda.

What are these mountain piles of alternatives to CUDA, then? And to be clear - I'm not talking about alternatives that still leverage or ultimately compile to CUDA.

People pick CUDA/PyTorch not for the best performance; they pick it for ease of implementation and ease of coding. Do you have any idea how hard it is to implement DeepSeek? No, you don't. It doesn't remove the need for broad and easy coding languages. This is equivalent to saying that we should never have made an API because of assembly. Sure, nothing beats that, but good luck coding.

The ease of implementing DeepSeek is what remains to be seen. But it doesn't even matter much, since the methodologies are now public and that's what matters most. Everyone is free to learn from what they've achieved and adapt/implement as they see fit. Which is why the likes of Meta and Apple are well positioned to take advantage of any benefits.

Yup

Also a business dimwit that has never heard of the Jevons paradox. It's quite cute.

Well I'm just glad papa Satya taught you something new today.



Sorry your V8 supertruck is at risk of being surpassed by a little 4 pot turbo from China. These kinds of butthurt reactions are always entertaining.
 
Last edited:

GHG

Gold Member
Seems legit guys. Engineers are testing it out in the US on our own servers as we speak. Here is directly from my VP of machine learning. This is literally what he gets paid for. I'm quoting verbatim. Not posting a screenshot as I'm not comfortable with it on the internet.

Me: Hey have you looked into Deep seek R1 yet?

Him: :) yes. The model came out last week.

Me: Are the performance numbers and costs what they claim to be?

Him: Yes. Although I am surprised why the market is reacting to it this way today. By stock market standard, it is a week (eternity) old news. Why did it catch your interest?

Me: Seeing it hit mainstream news with the Nvidia price drop. Didn't know anything about it till then
Him: They spent $5 million to crater Nvidia by $600 B :)

It would actually be more concerning if today's move in Nvidia's stock (and other impacted semiconductor stocks) wasn't because of DeepSeek.

I'm not seeing any other reason for it and would put the delay down to the claims needing to be verified because of their origin.

Based on today's activity, the beneficiaries of this news mostly recovered or even ended green.
 
Last edited:

ProtoByte

Weeb Underling
Not really.

Reaching the singularity means a loss of control. Once we hit this milestone, it means a superintelligent AI system could take over the process of invention and innovation.

This used to be sci-fi, but now it's a very real possibility.
I just don't buy that AI will be able to infinitely code itself with no human intervention. Generally, I believe that the more complex the system is, the more fragile it is. A trip-up in the wrong place will lead to cascading failures.
 

viveks86

Member
I just don't buy that AI will be able to infinitely code itself with no human intervention. Generally, I believe that the more complex the system is, the more fragile. Trip up in the wrong place will lead to cascade failures.
Yeah, basic human reasoning has not been emulated yet. Like I said earlier, we are spending days teaching it to read basic flowcharts. AI code generation is simply going to make development faster. It will be yet another tool in the toolkit and can maaaybe replace a handful of junior engineers who show no growth potential. Replacing $300K devs with AI? Not something I'm worried about for the rest of my career. Can it happen in the distant future? Who the hell knows. Maybe. Maybe we will invent time travel too and be visited by our future selves. Or aliens. It's all... possible...
 
Last edited:

Loomy

Thinks Microaggressions are Real
I always knew Nvidia was inflated. Good to see.

Everyone going all in on AI is equally moronic. AI is just a tool; it's the products that matter.
The tool is still being assembled, and everyone wants to own a part of it.
 