[HN Gopher] Nvidia Announces Financial Results for Second Quarte...
___________________________________________________________________
Nvidia Announces Financial Results for Second Quarter Fiscal 2024
Author : electriclove
Score : 115 points
Date : 2023-08-23 20:24 UTC (2 hours ago)
HTML web link (nvidianews.nvidia.com)
TEXT w3m dump (nvidianews.nvidia.com)
| xnx wrote:
| The good news is that Nvidia's high GPU prices motivate everyone
| (Intel, AMD, ARM, Google, etc.) to try and tackle the problem by
| making new chips, making more efficient use of current chips,
| etc. For all the distributed computing efforts that have existed
| (prime factorization, SETI@Home, Bitcoin, etc.), I'm surprised
| there isn't some way for gamers to rent out use of their GPUs
| when idle. It wouldn't be efficient, but at these prices it could
| still make sense.
| NavinF wrote:
| You can do that for inference, but most gamers have a single
| GPU with <24GB VRAM, which kinda sucks for training. A 3090 or
| 4090 is the minimum for reasonable batch sizes.
| Uehreka wrote:
| They're all pretty motivated, they've been motivated for years,
| and almost nothing is happening. This situation isn't exactly a
| poster child for the Efficient Markets Hypothesis.
|
| Every year just sounds like "Nvidia's new consumer GPUs are
| adding new features, breaking previous performance ceilings,
| running games at huge resolutions and framerates. Their
| datacenter cards are completely sold out because they can spin
| straw into gold, and Nvidia continues to develop new AI and
| graphics techniques built on their proprietary CUDA framework
| (that no one else can implement). Meanwhile AMD has finally
| sorted out raytracing, and their consumer GPUs are... well not
| as good as Nvidia's but they're a better value if you're
| looking for a competitor to one of Nvidia's 60 or 70 line
| GPUs!"
| ericmay wrote:
| > This situation isn't exactly a poster child for the
| Efficient Markets Hypothesis.
|
| I'm unsure why you're criticizing the Efficient Markets
| Hypothesis or even using it here, but you also need to
| analyze this over some time horizon, because the market and
| marketplaces are not static.
| willis936 wrote:
| That description applies equally to the situation in 2023,
| 2022, 2021, 2020, 2019, 2018, 2017, and 2016.
| lotsofpulp wrote:
| Efficient market hypothesis is unrelated to Nvidia's
| competitors being unable to offer a competing product so far.
|
| https://www.investopedia.com/terms/e/efficientmarkethypothes.
| ..
|
| > The efficient market hypothesis (EMH), alternatively known
| as the efficient market theory, is a hypothesis that states
| that share prices reflect all information and consistent
| alpha generation is impossible.
| IshKebab wrote:
| There have been various attempts but you need a workload that's
| basically public and also runs on a single GPU (because you
| don't have NVLink or similar).
| tric wrote:
| > I'm surprised there isn't some way for gamers to rent out use
| of their GPUs when idle.
|
| https://rendernetwork.com/
|
| "The Render Network(r) Provides Near Unlimited Decentralized
| GPU Computing Power For Next Generation 3D Content Creation."
|
| "Render Network's system can be broken down into 2 main roles:
| Creators and Node Operators. Here's a handy guide to figure out
| where you might fit in on the Render Network:
|
| Maybe you're a hardware enthusiast with GPUs to spare, or maybe
| you're a cryptocurrency guru with a passing interest in VFX. If
| you've got GPUs that are sitting idle at any time, you're a
| potential Node Operator who can use that GPU downtime to earn
| RNDR."
| tzhenghao wrote:
| > motivate everyone (Intel, AMD, ARM, Google, etc.) to try and
| tackle the problem by making new chips
|
| Yes, there have been repeated efforts to chip away at Nvidia's
| market share, but there's also a graveyard full of AI
| accelerator companies that failed to find product-market fit
| due to lack of software toolchain support - and that applies
| even to older Nvidia GPUs and their compatible toolchains, let
| alone other players like AMD. This isn't a hit on Nvidia; I'm
| just saying things move so quickly in the space that even the
| only-game-in-town is trying to catch up.
|
| Nvidia is also leading by staying one or two hardware cycles
| ahead of their competition. I'm pretty confident AI workloads
| in the enterprise are their next major focus [1]. I think this,
| more than anything else, will accelerate enterprise AI adoption
| if well executed.
|
| To your point, I think the industry needs to focus more on the
| toolchains that sit right between the deep learning frameworks
| (PyTorch, TensorFlow, etc.) and the hardware vendors (Nvidia,
| AMD, Intel, ARM, Google TPU, etc.). Deep learning compilers
| will dictate whether all AI workloads run on just Nvidia or on
| several other chips (see the sketch below).
|
| [1] - https://www.nvidia.com/en-us/data-
| center/solutions/confident...
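| A minimal sketch of that compiler layer (assuming PyTorch 2.x;
| the model here is a toy stand-in): the same model gets lowered
| through a swappable compiler backend, which is exactly the seam
| where non-Nvidia hardware could plug in.
|
|     import torch
|
|     # Toy model; torch.compile lowers it through a compiler
|     # backend ("inductor", the default) that emits device-
|     # specific kernels.
|     model = torch.nn.Sequential(
|         torch.nn.Linear(512, 512),
|         torch.nn.ReLU(),
|     )
|     compiled = torch.compile(model, backend="inductor")
|     out = compiled(torch.randn(8, 512))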
| Conscat wrote:
| I am certain that several years ago, I was given an ad for
| exactly such a service and even tried it out, but I cannot for
| the life of me remember its name. It had some cute salad motif,
| and its users are named "chefs".
|
| EDIT: It was just named Salad. https://salad.com/
| https://salad.com/download
| WanderPanda wrote:
| With interconnect being the biggest limitation these days I
| don't think this would work.
| xnx wrote:
| I'm not familiar with all the varied uses of GPUs but it
| seems like image generation could feasibly be distributed:
| large upfront download of models, then small inputs of text
| and settings, and small output of resulting images.
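| A minimal sketch of that shape (a hypothetical worker loop,
| assuming the Hugging Face diffusers library): the model
| download is large but one-time, while per-job traffic is tiny.
|
|     import torch
|     from diffusers import StableDiffusionPipeline
|
|     # One-time download of several GB of weights.
|     pipe = StableDiffusionPipeline.from_pretrained(
|         "runwayml/stable-diffusion-v1-5",
|         torch_dtype=torch.float16,
|     ).to("cuda")
|
|     def handle_job(prompt: str, seed: int) -> None:
|         # Input: a few hundred bytes of text and settings.
|         generator = torch.Generator("cuda").manual_seed(seed)
|         image = pipe(prompt, generator=generator).images[0]
|         image.save("out.png")  # Output: a few hundred KB.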
| WanderPanda wrote:
| For inference I agree! But training requires centralized
| gradient steps
| vajrabum wrote:
| If you're in a data center and running large training
| jobs then RDMA over Nvidia Mellanox Infiniband cards over
| high speed ethernet (like 100GB) are used to ship
| coefficients around without having that transfer
| bottleneck in the CPU.
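| A minimal sketch of the gradient all-reduce this accelerates
| (assuming PyTorch's torch.distributed, launched with torchrun,
| one process per GPU): with the NCCL backend over RDMA, the
| tensors move between GPUs without staging through the CPU.
|
|     import torch
|     import torch.distributed as dist
|
|     dist.init_process_group(backend="nccl")
|     grad = torch.randn(1024, 1024, device="cuda")
|     dist.all_reduce(grad, op=dist.ReduceOp.SUM)  # sum across ranks
|     grad /= dist.get_world_size()                # average gradients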
| xxpor wrote:
| 100 gig, that's considered cute nowadays.
|
| https://aws.amazon.com/blogs/aws/new-amazon-
| ec2-p5-instances...
|
| 3.2 terabits.
| Goronmon wrote:
| _The good news is that Nvidia's high GPU prices motivate
| everyone (Intel, AMD, ARM, Google, etc.) to try and tackle the
| problem by making new chips..._
|
| Or their dominance leads to competition throwing in the towel
| and investing resources in a market with less stiff
| competition.
|
| I wouldn't be surprised to see AMD start to pare back
| investment on high-end GPUs if things continue down this path.
| I would say Intel likely keeps pushing, but I'm less convinced
| they can actually make much headway in the near future.
| xnx wrote:
| As was mentioned in another thread on a slightly different
| topic, it wouldn't be surprising to see all non-Nvidia
| parties unite around some non-CUDA open standard.
| josemanuel wrote:
| Do you mean something like OpenCL?
| xnx wrote:
| Exactly. More resources might get applied to improving
| it.
| haldujai wrote:
| > I would say Intel likely keeps pushing, but I'm less
| convinced they can actually make much headway in the near
| future.
|
| It seems that Intel is making great headway on their fabs and
| may somehow pull off 5 nodes in 4 years. Intel 3 is entering
| high volume production soon and according to Gelsinger 20A is
| 6 months ahead of schedule and planned for H2 2024.
|
| If they do pull this off and regain leadership, that would
| change the outlook.
| myth_drannon wrote:
| vast.ai allows you to rent out GPUs.
| TheAlchemist wrote:
| What's also pretty interesting is that they actually didn't
| sell more chips this quarter - they ... just pretty much
| doubled the prices (hence the huge margin).
|
| This is what having a monopoly looks like!
|
| This is also why the companies that manufacture their cards
| didn't report any uptick in profits. I'm wondering how this
| plays out in a few months. Do they have any pricing power with
| respect to Nvidia? Or could Nvidia just switch to another
| manufacturer?
| thfuran wrote:
| There probably isn't another manufacturer they can switch high
| end stuff to. They recently tried moving at least some of their
| cards to Samsung but switched back last generation due to yield
| issues.
| wmf wrote:
| You have to distinguish between fabs and AIBs.
| thfuran wrote:
| If they treat their AIBs for their enterprise stuff
| anything like they do in the consumer space, they don't
| really have anything to worry about there (aside from the
| rest of them giving up on dealing with Nvidia's BS, I
| guess).
| pb7 wrote:
| [flagged]
| parthdesai wrote:
| > Raising prices means you are a monopoly?
|
| Not sure if you're intentionally choosing to ignore their
| point, but what they meant is that Nvidia can unilaterally
| raise prices and customers can't do anything, since they're a
| monopoly. You can't just say, well, I'll go to the next shop
| and buy something cheaper.
| issafram wrote:
| Not that it's much better, but wouldn't it be a duopoly
| considering that AMD is also a big player?
|
| Hopefully Intel continues to improve its GPU offerings
| tric wrote:
| > wouldn't it be a duopoly considering that AMD is also a big
| player?
|
| I don't think GPUs are commoditized. You can't swap an Nvidia
| GPU for an AMD GPU and get the same performance/results.
| midhir wrote:
| AMD seem to be catching up quickly lately. I'm running
| Stable Diffusion, Llama-2, and Pytorch on a 7900XTX right
| now. Getting it up and running even on an unsupported Linux
| distro is relatively straightforward. Details for Arch are
| here: https://gitlab.com/-/snippets/2584462
|
| The HIP interface even has almost exact interoperability
| with CUDA, so you don't have to rewrite your code.
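| A minimal sketch of what that looks like from Python (assuming
| a ROCm build of PyTorch): HIP presents itself through the CUDA
| API surface, so unmodified "cuda" code runs on the AMD card.
|
|     import torch
|
|     print(torch.cuda.is_available())  # True on ROCm builds too
|     print(torch.version.hip)    # set on ROCm, None on CUDA builds
|     x = torch.randn(1024, device="cuda")  # lands on the AMD GPU
|     print(x.sum().item())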
| capableweb wrote:
| > Not that it's much better, but wouldn't it be a duopoly
| considering that AMD is also a big player?
|
| Not sure AMD would be considered a big player, what would be
| the percentage threshold for that?
|
| According to the Steam Hardware (& Software) Survey
| (https://store.steampowered.com/hwsurvey/Steam-Hardware-
| Softw...), ~75% of computers with Steam running have an NVIDIA
| GPU, while ~15% have an AMD GPU.
|
| AMD is the closest thing to a competitor NVIDIA has, but they
| are still very far from matching NVIDIA's market share.
|
| I'm sure in AI/ML spaces, NVIDIA holds an even higher market
| share due to CUDA and the rest of the ecosystem.
| steno132 wrote:
| Nvidia's undervalued.
|
| Once enterprise adoption of AI picks up, demand for chips will
| increase 2-3 times further.
|
| I'm told Nvidia's building their own fab in Southeast Asia over
| the next few years. This will massively boost their output.
| kccqzy wrote:
| It remains debatable whether mass enterprise adoption of AI
| would happen first, or Nvidia's competitors coming up with
| equivalent chips would happen first.
| danielmarkbruce wrote:
| On the surface, it's not debatable. Enterprises are going
| full steam ahead on AI. Building out an ecosystem to
| challenge Nvidia seems like a decade long battle, if it's
| even possible.
| haldujai wrote:
| What is full steam ahead for enterprises? It's not like
| they're throwing autoregressive LLMs into production any
| time soon.
|
| In any case Nvidia is expecting to ship ~550k H100s in
| 2023, hardly enough to satisfy every user.
|
| Tesla decided to go in-house. TPUv4 and Gaudi2 exceeded A100
| performance; they just never hit scale or the market, and then
| Hopper added optimizations for transformers, rendering those
| chips relatively obsolete.
|
| Nvidia's lead is not unassailable and it seems incredibly
| unlikely that they would not face serious competition
| within the next 2-3 years given the $ being thrown around.
| danielmarkbruce wrote:
| Large enterprises are already putting them into
| production. I have direct experience with it.
|
| It's not unassailable. But it's going to take a lot to
| make _any_ difference to Nvidia's volume or pricing, let
| alone a meaningful difference. They already face serious
| competitors in Google and AWS with TPU and Inferentia,
| but those competitors are at a pretty big disadvantage
| for now (and others too). The CUDA ecosystem is a big
| advantage. Nvidia has a lot of leverage with semi
| manufacturers because of volume. They spend way more on
| chip R&D than their competitors in the space. You can buy
| and own Nvidia chips vs. TPU and Inferentia. They have
| brand recognition. It's... a tough road ahead for
| competitors.
| haldujai wrote:
| It's hard to imagine Nvidia will maintain what is right now
| effectively 100% market share for training forever,
| especially given the $ being thrown around.
| steno132 wrote:
| There's no competitor to Nvidia for the next 10 years.
|
| They've got a monopoly. And with AI's coming explosion, I'd
| wager 50/50 odds Jensen becomes the world's first
| trillionaire.
| johnvanommen wrote:
| > Once enterprise adoption of AI picks up, demand for chips
| will increase 2-3 times further.
|
| Possibly their greatest asset, as an investment, is their crazy
| high margins. Nvidia in 2023 is where Intel was in 2007, where
| they could basically charge almost any price because they were
| so dominant in the market. I remember when E5s were selling for
| $2000 a pop and data centers were using thousands of them.
| qwytw wrote:
| > will increase 2-3 times further.
|
| That, and possibly way more than that, is already priced in.
| Nvidia's stock is extremely expensive not because of what they
| are making now (which is not a lot relative to the valuation;
| they just barely surpassed Intel in revenue this quarter) but
| because investors expect pretty much exponential growth over
| the next few years.
| seydor wrote:
| It's come to the point that people are begging competitors to
| do something in the space. Who knows, maybe some cheap Chinese
| ASIC that can do matrix multiplication ends up eating their
| lunch.
|
| You'd think that, at the level of capitalization of tech
| companies, competition would be cutthroat
| zapdrive wrote:
| There are a bunch of startups trying to develop AI GPUs.
| Someone linked them in a comment a few days ago.
| smoldesu wrote:
| You're kinda underselling what exactly Nvidia is doing right
| now. If any Chinese company could compete with something like
| the DGX GH200, they would be building GPUs for the PRC, not
| exporting them.
|
| There's also the problem of industry hostility. Even _if_
| Nvidia were dethroned in the hardware space, it's unlikely
| their successor would improve the lock-in situation. It will
| take an intersectional effort to change things.
| TheAlchemist wrote:
| Capitalization in itself is meaningless. If you had 50% of
| Nvidia's outstanding shares and tried to sell 10% of that, the
| capitalization would crater.
|
| What really counts is the profit. It is pretty huge now, but
| not 'that' huge (at least yet).
| pier25 wrote:
| So gaming is now less than 20% of their business? Holy shit.
| gigatexal wrote:
| A blockbuster quarter for sure, with EPS up 854%.
| TechnicolorByte wrote:
| Incredible company. It's absolutely insane how far ahead they are
| with the investments they made over a decade ago.
|
| So nice to see a "hard" engineering (from silicon to software)
| SV-founded company getting all this recognition. Especially
| after what has felt like a decade of SV-hyped software
| companies dominating mainstream financial markets pre-pandemic,
| with a spate of overpriced IPOs and large ad-revenue-generating
| megacorporations.
| kccqzy wrote:
| The moniker of "hard" engineering is neither precise nor
| useful. What makes engineering hard? Is solving problems with
| distributed systems, even if these systems are for ads, hard?
| Or do you mean hardware? In that case even Nvidia is not hard
| enough since they don't fabricate their own chips. Or do you
| mean designing hardware? Then what makes writing SystemVerilog
| at a desk hard, but writing Python not hard?
| TechnicolorByte wrote:
| I admit that was a glib comment and unnecessary.
|
| I'm really speaking about Nvidia's ability to perform well in
| both hardware and software, at chip-scale and datacenter-
| scale. Also speaking of their product/business direction that
| revolutionizes multiple industries (leaders in graphics with
| ray tracing and AI frame/resolution scaling; leaders in AI
| infra and datacenter systems, etc.), all resulting in big
| impacts to their respective industries.
|
| You're right that many of those software-only companies do
| very real engineering with distributed systems and such. I
| should've been more precise: I was really complaining about
| the SV hype of the 2010s focusing on regulation-breaking
| companies like Airbnb, Uber, WeWork, etc., and on companies
| like Meta and Google that focus on pushing ads for their
| revenue.
| omniglottal wrote:
| I suppose the difference is engineering something
| deterministic (i.e., physics, electronics, logic) versus
| something soft and indistinct (SEO, ad impressions, customer
| conversion rate).
| epolanski wrote:
| Are they so far ahead?
|
| AMD GPUs get comparable results as of late on Stable Diffusion.
|
| Software and hardware from competitors will catch up, crunching
| 4/8/16 bit width numbers is no rocket science.
| johnvanommen wrote:
| > Software and hardware from competitors will catch up,
| crunching 4/8/16 bit width numbers is no rocket science.
|
| I made the mistake of buying an A770 from Intel, based on the
| spec sheet. Hardware is comparable to what Nvidia is selling,
| for 70% of the price.
|
| It's basically a useless paperweight. The AI software crashes
| constantly, and when it's not crashing, it performs at half
| the level of Nvidia's cards.
|
| Turns out that drivers and software compatibility are a big
| deal, and Intel is way way behind in that arena.
| david-gpu wrote:
| > Software and hardware from competitors will catch up,
| crunching 4/8/16 bit width numbers is no rocket science.
|
| I used to think like that, until I got a job there and... Oh,
| boy! I left five years later still amazed at all the ever
| more mind bending ways you can multiply two damn matrices. It
| was the most tedious yet also most intellectually challenging
| work I've ever done. My coworkers there were also the
| brightest group of engineers I've ever met.
| smoldesu wrote:
| Nvidia has a small lead on the industry in a few places,
| adding up to _super_ attractive backend hardware options.
| They aren't invincible, but they profit off the hostility
| between their competitors. Until those companies gang up to
| fund an open alternative, it's open season for Nvidia and HPC
| customers.
|
| The recent Stable Diffusion results are great news, but also
| don't include comparisons to an Nvidia card using the same
| optimizations. Nvidia claims that Microsoft Olive doubles
| performance on their cards too, so it might be a bit of a
| wash: https://blogs.nvidia.com/blog/2023/05/23/microsoft-
| build-nvi...
|
| Plus, none of those optimizations were any more open than
| CUDA (since it used DirectML).
|
| > crunching 4/8/16 bit width numbers is no rocket science.
|
| Of course not. That's why everyone did it:
| https://onnxruntime.ai/docs/execution-providers
|
| The problem with that "15 competing standards" XKCD is that
| normally one big proprietary standard wins. Nvidia has the
| history, the stability, the multi-OS and multi-arch support.
| The industry can definitely overturn it, but they have to
| work together to obsolete it.
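| For what it's worth, the execution-provider idea from that link
| is a good example of how retargeting could look; a minimal
| sketch (assuming the onnxruntime package; "model.onnx" is a
| placeholder file):
|
|     import onnxruntime as ort
|
|     # The same model file can target different vendors'
|     # backends just by reordering this preference list.
|     session = ort.InferenceSession(
|         "model.onnx",
|         providers=[
|             "CUDAExecutionProvider",  # Nvidia
|             "ROCMExecutionProvider",  # AMD
|             "CPUExecutionProvider",   # fallback
|         ],
|     )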
| NickC25 wrote:
| Absolutely monster numbers. The aftermarket trading is up over 8%
| as of right now, roughly $41 USD to approximately $513 a share.
| Insane.
|
| Does anyone who is a lot more versed in company valuation
| methodology see this as being near peak value, or does Nvidia
| have a lot more room to run?
| epolanski wrote:
| This incredible growth was already priced in at $250.
|
| Now it's just crazy.
| squeaky-clean wrote:
| It's pretty overpriced already if you're looking at the
| fundamentals, and has been for a while. But fundamentals
| haven't really mattered in tech stocks for a long time.
|
| If you want the responsible advice, it's overpriced. If you
| want my personal advice, well I bought more yesterday
| afternoon.
| reilly3000 wrote:
| It's basically a meme stock now. I don't think anyone should be
| surprised by wide swings and irrational pricing going forward
| into the next few months.
| vsareto wrote:
| I don't think the market leader for graphics cards -- a
| technically complex product compared to a bunch of brick-and-
| mortar stores selling video games -- is something you can
| consider a meme stock
| pb7 wrote:
| What makes it a meme stock? It's printing money from an
| industry that is only starting. This isn't crypto nonsense.
| kelvie wrote:
| (Not sure if it's true), but a meme stock is one whose
| price is propped up by retail traders, and spreads through
| social media / word-of-mouth, as memes do.
|
| How we prove it's one is probably another matter.
| pb7 wrote:
| Retail investors make up a low single digit percent of
| individual stock ownership. /r/wallstreetbets is not
| putting even a dent in a $1T company's stock price.
| fnordpiglet wrote:
| Yeah, every company of any note is planning how to use AI,
| and a lot of the use cases are already proved out. This
| isn't speculative nonsense. The question is how big does it
| get, not will it be big.
|
| Crypto and blockchain never had an actual proved out use
| case. There was an interesting idea but no one ever could
| figure out a way it was useful. The costs associated were
| much higher than the risks of not using it.
|
| People who think this is a meme aren't paying attention,
| and they're certainly not in the rooms of power where AI
| planning is happening at megacorps. I've been in them, and
| it's serious and material and we are just now beginning to
| scratch the surface.
| danielmarkbruce wrote:
| Top line growing 100% a year, faster recently... Doesn't take
| long for $50 billion p.a. to turn into $1 trillion p.a. at that
| rate...
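| As a quick sanity check on that arithmetic (a sketch, not a
| forecast):
|
|     import math
|
|     # Years of steady 100%/yr growth to go from $50B to $1T:
|     print(math.log2(1000 / 50))  # ~4.3 doublings, roughly 4-5 years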
| mikeweiss wrote:
| In my opinion it's likely mostly pull forward demand. Companies
| are racing to buy as many chips as possible and hoard them.
|
| I already saw a few posts here on HN from companies that threw
| down insane amounts of $$ on H100s and are now looking to rent
| out their excess capacity. I'm guessing we'll be seeing a lot
| more posts like that soon.
| mholm wrote:
| Nvidia is the pickaxe seller in a gold rush. Their valuation is
| very much tied to how big AI grows in the next several years,
| and how quickly competitors can arise. I could easily see them
| continuing to go up from here, especially if AI keeps on
| expanding utility instead of leveling off as some fear.
| tmn wrote:
| Valuation fundamentals don't justify current prices. That said
| it could easily go higher (much higher). Passive investing has
| created a constant bid that has significantly distorted price
| discovery compared to pre passive era.
| rvz wrote:
| > The aftermarket trading is up over 8% as of right now,
| roughly $41 USD to approximately $513 a share. Insane.
|
| 8% is close to nothing in stocks. Biotech stocks go up and down
| more than that without earnings announcements.
|
| > Anyone who is a lot more versed in company valuation
| methodology see this as being near peak value, or does Nvidia
| have a lot more room to run?
|
| As long as fine-tuning, training, or even running these models
| is inefficient, and there are no efficient alternatives that
| don't require these GPUs, Nvidia will remain unchallenged.
|
| EDIT: It is true, like it or not, AI bros. There are too many
| to list. For example, just yesterday:
|
| Fulcrum Therapeutics, Inc. (FULC) up 38%.
|
| China SXT Pharmaceuticals (CM:SXTC) down 25%.
|
| Regencell Bioscience Holdings (RGC) up 28%.
|
| NanoViricides (NNVC) up 20%.
|
| Armata Pharmaceuticals (ARMP) down 23%.
|
| [0] https://simplywall.st/stocks/us/pharmaceuticals-biotech
| pb7 wrote:
| Biotechs are lottery tickets, not stocks. You're just
| gambling on binary results.
| rvz wrote:
| > Biotechs are lottery tickets, not stocks.
|
| Please.
|
| Stocks are lottery tickets and Biotech stocks are stocks.
|
| > You're just gambling on binary results.
|
| The risks are no different from those of the AI bros buying
| Nvidia and other overpriced stocks at the very top, at all-
| time highs, or making extremely risky 0DTE trades on earnings
| announcements.
|
| Do the AI bros who jumped in late really have to stay married
| to their already overpriced stocks to make 8% on earnings,
| while the very early folks start selling to take their
| profits?
| gorenb wrote:
| Stocks are lottery tickets...
| rvz wrote:
| Exactly.
|
| Nvidia is just one of many lottery tickets and 8% in one
| day is hardly volatile in stocks.
| pb7 wrote:
| Not if you understand what stocks are and how betting on
| biotech stocks is not a wise investment.
| rvz wrote:
| My point is, 8% on earnings is hardly volatile.
|
| > Not if you understand what stocks are and how betting
| on biotech stocks is not a wise investment.
|
| So you're giving investment advice to put money into NVDA
| stock at the top, at all-time highs, right now on earnings, as
| a 'wise investment' to make 8% (after hours), when others are
| clearly taking their money out of the market.
|
| Unless you already invested in NVDA stock last year, that
| move is gone and you're just telling retail latecomers to
| throw money at NVDA at the top so others can take their
| profits.
| mikestew wrote:
| Whelp, I guess those September NVDA call options I sold are going
| to get exercised. Who woulda guessed after the crypto fallout
| that "AI" would come along and bump the price back up.
|
| Record revenues, and a dividend of $0.04 on a $450 stock? That's
| not even worth the paperwork. For example, if you bought 100
| shares, that's $45K. From that, around September $4 will show up
| in your account, which you have to pay taxes on. So $3 or so net
| on a $45,000 investment. Sure, there were stock buybacks, but why
| keep the token dividend around?
| kinghajj wrote:
| Should have sold a call credit spread instead!
|
| For large shareholders, the dividend would still be worthwhile.
| From what I could find, Jensen has 1.3 million shares, so he'd
| receive over $200k in dividends this year. You might think
| that's chump change, but another source lists his salary at
| just under $1m; another 20% bump in liquid income is nothing to
| sneeze at.
| mikestew wrote:
| _Should have sold a call credit spread instead!_
|
| I'll get right on that...after I go look up what that means.
| :-) I'm but a simple options trader who sells calls to unload
| stock I didn't want anymore anyway, and the premium is the
| icing on that cake. Left some money on the table this time,
| but I otherwise would have just sold the shares outright, and
| I did make some bank regardless.
|
| Gonna be missing that sweet, sweet $0.04 dividend, though.
| kinghajj wrote:
| A call credit spread simply means buying an even more out-
| of-the-money call along with the one you sold. It would
| have reduced the premium collected, but the long call would
| appreciate on sudden moves like today's.
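| A minimal sketch of that payoff (hypothetical strikes and
| premium, not today's actual chain):
|
|     def call_payoff(price: float, strike: float) -> float:
|         return max(price - strike, 0.0)
|
|     short_strike, long_strike = 500.0, 520.0  # hypothetical
|     net_credit = 4.0  # premium received minus premium paid
|
|     def spread_pnl(price: float) -> float:
|         return (net_credit
|                 - call_payoff(price, short_strike)
|                 + call_payoff(price, long_strike))
|
|     print(spread_pnl(480.0))  # +4.0: both legs expire worthless
|     print(spread_pnl(560.0))  # -16.0: loss capped at width - credit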
| kikokikokiko wrote:
| Theta gang ftw. But I would advise you to stay away from
| NVDA, as soon as the first quarter with flat or decreasing
| revenues comes (and it WILL come), the fall would be one to
| tell your grandchildren about.
| loeg wrote:
| This benefit is basically only to large shareholders who
| can't sell stock. Which might be insiders like Jensen and...
| anyone else? Everyone else can just sell, like, 0.0001% of
| their stock or whatever.
| catchnear4321 wrote:
| many times what a lot of people make in a year is nothing to
| sneeze at.
|
| especially when it is awarded for merely having a stack of
| papers.
| Vvector wrote:
| The stock is up 9% or $45/share after hours. Jensen just made
| $58 million. $200k doesn't pay his dry cleaning bill.
| haldujai wrote:
| > Should have sold a call credit spread instead!
|
| Why?
|
| > From what I could find, Jensen has 1.3 million shares, so
| he'd receive over $200k in dividends this year. You might
| think that's chump change, but another source lists his
| salary at just under $1m; another 20% bump in liquid income
| is nothing to sneeze at.
|
| Jensen Huang is worth $42 billion and has been a billionaire
| for probably a decade or so now? Any CEO with that net worth
| would use stock-secured loans/LOCs for liquidity. $200k is
| very much chump change.
| thomas8787 wrote:
| Jensen is one of the largest shareholders. With over 80 million
| shares that's an over 3 million dollar dividend for him.
| epolanski wrote:
| Wait, 80M shares? He's worth $4B then. Not bad.
| tyre wrote:
| just wait until it's $4bn and another $3m!
| pyrrhotech wrote:
| He's worth way more than that.
| https://www.bloomberg.com/billionaires/profiles/jenhsun-
| huan...
| _zoltan_ wrote:
| I sold 600C for this Friday an hour or so before earnings. Free
| money with 168% IV.
| HDThoreaun wrote:
| Up more than 10% after hours compared to close yesterday. I
| really thought NVDA had hit its ceiling at $1+ trillion,
| apparently not. Really does feel like a huge opportunity for
| Intel to me. They have the fab capacity to pump out at least
| reasonably competitive GPUs if they can figure out the software
| side of things.
|
| P/E still above 50 even after the AI craze 9x'd EPS this
| quarter. Still hard for me to see how that valuation ever
| makes sense, but what do I know.
| UncleOxidant wrote:
| Intel doesn't seem to be able to execute. It's not just pumping
| out GPUs - for AI you need drivers, and the equivalent of CUDA
| and all the various libraries built on CUDA, like cuDNN. They
| do have oneAPI but it hasn't caught on like CUDA in that space.
| It's kind of too bad since oneAPI is open and CUDA is not.
| highwaylights wrote:
| I can really see Intel figuring this out. A lot of people on
| HN talk about Intel as an also-ran, just like they spoke
| about AMD before Zen.
|
| Raptor Lake is at 7nm and incredibly competitive there (~2700
| single core on geekbench, taken with a pinch of salt).
| They're still planning on being on 1.8nm/18A within 2 years,
| while at the same time ramping up their GPU efforts (albeit
| using TSMC for 4nm). Nvidia is very much in the lead, but
| this is just the beginning.
|
| tldr; I ain't hear no bell.
| andromeduck wrote:
| The problem with Intel is:
|
| 1. They don't pay - Nvidia/Google/Apple easily pay 1.5-2x
| Intel, before appreciation.
|
| 2. They're cheap/bureaucratic. The office sucks, your laptop
| sucks.
|
| 3. They suck at software.
| https://pharr.org/matt/blog/2018/04/18/ispc-origins
|
| 4. They can't develop/retain talent. Half the ML-HW/FW
| teams at AMD/Google/Nvidia/Apple are ex-Intel.
| HDThoreaun wrote:
| Right, but the market is saying that a dominant GPU business
| is worth more than a trillion dollars. Just hard for me to
| believe that they can't get the business off the ground with
| that kind of money on the table. Can't they just hire all of
| nvidia's developers and pay them 5x as much?
| UncleOxidant wrote:
| > Can't they just hire all of nvidia's developers and pay
| them 5x as much?
|
| Lol... Intel is famously stingy when it comes to salaries.
| i_have_an_idea wrote:
| You have no idea. There are a lot of senior engineers at
| NVDA making 7 figures annual total comp. How many are
| there at Intel?
| HDThoreaun wrote:
| for a trillion dollars though... eventually you have to
| believe Pat gets fired and replaced by someone who is
| 100% all in on GPUs if he can't figure this out
| orzig wrote:
| Maybe everything changes at $1 trillion, but I definitely
| see smaller (but public) companies leaving money on the
| table because it would require cultural change.
| UncleOxidant wrote:
| I'd argue that Intel being stingy with salaries is a big
| part of why they're so behind here. They just don't seem
| to be very serious about this. Intel has made several
| runs at the GPU market over the years and they just keep
| ending up where they are. And now NVDA has such a huge
| advantage (software and hardware) that it just gets
| harder and harder (and more expensive) to overcome.
|
| Probably Intel's best bet now would be to try to be the
| fab for NVDA.
| andrepd wrote:
| The market is also saying that Tesla is worth more than
| BMW, VW, Audi, Mercedes, Toyota, Hyundai, Fiat, Ford, and
| dozens of others _combined_. Mehh, I don't know.
| UncleOxidant wrote:
| Exactly, the market isn't always rational. There's a lot
| of work on quantization in neural nets, for example, that
| can allow them to work sufficiently well on less capable
| hardware. It could be that some breakthrough there would
| obviate the need for NVDA hardware (or at least reduce
| how many are needed).
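| A minimal sketch of the kind of quantization being described
| (assuming PyTorch; the model is a toy stand-in): weights stored
| as int8 instead of float32, so the same network fits and runs
| on far less capable hardware.
|
|     import torch
|
|     model = torch.nn.Sequential(
|         torch.nn.Linear(512, 512),
|         torch.nn.ReLU(),
|     )
|     quantized = torch.ao.quantization.quantize_dynamic(
|         model, {torch.nn.Linear}, dtype=torch.qint8
|     )
|     out = quantized(torch.randn(1, 512))  # runs on a plain CPU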
| lotsofpulp wrote:
| It is rational if you interpret market capitalization and
| share price movements as "the market is saying Tesla WILL
| be worth more than x,y z combined between time now and
| time whenever you want to sell it."
|
| For different people, the timespan between now and when
| they may want or need to sell it is different, and thus
| different people will arrive at different conclusions.
|
| And note that "worth more" above simply means growth in
| market capitalization, so as long as someone is willing
| to buy the shares at a price supporting that increased
| market cap, then it does not matter if Tesla is still
| selling fewer cars than the others combined.
| scrlk wrote:
| "In the short run, the market is a voting machine but in
| the long run, it is a weighing machine."
| TheAlchemist wrote:
| Exactly this!
|
| And as somebody with a significant short position, I
| would add - "The market can stay irrational longer than
| you can stay solvent"!
|
| Their numbers for this and next Q are absolutely amazing.
| It's also quite "refreshing" - a company with a great
| product, almost without competition (so far - it will
| come real quick). And the fun part is that their main
| advantage is probably CUDA and not even the chips themselves
| (which, by the way, they don't manufacture - they "only" do
| the design).
|
| But still - even with those numbers, and even with this
| pace of growth (both absolutely unsustainable, and likely
| to reverse hard next year) - the valuation doesn't make any
| sense, especially given the current interest rates.
| peanuty1 wrote:
| And Rivian has a greater market cap than Nissan.
| nemothekid wrote:
| > _Can't they just hire all of nvidia's developers and pay
| them 5x as much?_
|
| As time goes on I don't see how you break the CUDA moat,
| even if you had all of Nvidia's AI engineers.
|
| Breaking it means you need everyone in AI to target your new
| (hopefully open) platform, and that platform has to be faster
| than CUDA. Given how most frameworks of the last 10 years
| have been optimized for CUDA, you would need to turn around
| a globe-sized cruise ship.
|
| If Intel's GPUs are only 3% faster, will that be enough to
| rewrite my entire software stack for something not CUDA? If
| Intel opts for a translation layer, could they ever match
| nvidia's performance?
| HDThoreaun wrote:
| Well I'm not super experienced with GPU development, but
| aren't most people using packages built on top of CUDA,
| like PyTorch etc? Would it be impossible to throw tons of
| resources at those packages so they handle whatever Intel
| comes up with as well as they handle CUDA?
|
| If Intel is 10% slower but 50% cheaper and the open
| source stack you use has been heavily updated to work
| well with Intel drivers would that not be an enticing
| product?
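| A minimal sketch of that device-agnostic style (assuming
| PyTorch plus Intel's separate intel-extension-for-pytorch
| package, which registers an "xpu" device; both the package
| setup and the device name are assumptions about the stack):
|
|     import torch
|     import intel_extension_for_pytorch  # registers "xpu"
|
|     device = "cuda" if torch.cuda.is_available() else "xpu"
|     model = torch.nn.Linear(512, 512).to(device)
|     x = torch.randn(8, 512, device=device)
|     print(model(x).shape)  # same code, different vendor under it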
| UncleOxidant wrote:
| Intel's been trying this for several years now (oneAPI
| and OpenVINO), but so far they haven't gotten much
| traction. CUDA is just really entrenched at this point.
| michaelt wrote:
| _> aren't most people using packages built on top of
| CUDA like PyTorch etc?_
|
| Yes, and in fact both AMD and Intel have libraries. You
| can run Stable Diffusion and suchlike on AMD GPUs today,
| apparently. And you can export models from most ML
| frameworks to run in the browser, on phones and suchlike.
|
| _> If Intel is 10% slower but 50% cheaper [...] would
| that not be an enticing product?_
|
| Sometimes, yes. Some of the largest models apparently
| cost $600,000 in compute time to train [1], so halving
| that would be pretty appealing.
|
| However, part of the reason for Nvidia's dominance is
| that if you're hiring an ML engineer for $160,000/year,
| spending $1,600 to give them an RTX 4090 is chump change.
|
| [1]
| https://twitter.com/emostaque/status/1563870674111832066
| tgma wrote:
| - Can't they just hire all of nvidia's developers and pay
| them 5x as much?
|
| No.
| Mountain_Skies wrote:
| Over the past decade Intel seems to have become more
| interested in social causes than in technology, maybe with a
| side of government backrubbing to keep some income flowing.
| UncleOxidant wrote:
| Nah, the biggest problem is that Intel became very risk
| averse. Yeah, they'll talk a good game on taking risks, but
| when it comes down to it people who took risks that failed
| tend to not be at Intel and other employees see that and
| think that maybe they need to play it safe.
| johnvanommen wrote:
| > Yeah, they'll talk a good game on taking risks, but
| when it comes down to it people who took risks that
| failed tend to not be at Intel and other employees see
| that and think that maybe they need to play it safe.
|
| I worked at Sears corporate when Amazon was getting big,
| about 25 years ago.
|
| Always made me chuckle when armchair quarterbacks on TV
| would wonder why Sears couldn't do what Amazon did.
|
| Bezos took _tremendous_ risks in the late 90s and early
| 00s, while Sears was trying to figure out how to wring a
| few more pennies out of their stores. Sears Corporate was
| 110% focused on taking the existing business and
| maximizing profits, not on innovation of any kind
| whatsoever.
| marricks wrote:
| Why, and what does it mean, for Nvidia to announce fiscal
| results a year ahead of time?
|
| Is it just promises to sell chips in advance, so that's how far
| it's booked, or do they own a time machine...?
| danielmarkbruce wrote:
| They announced Q2 results, ending July 31. Their fiscal year is
| a little unusual: it ends at the end of January. So their
| fiscal 2024 ends Jan 31, 2024.
| scrlk wrote:
| Financial years are named by the calendar year that they end
| in, so FY24 is the financial year ending in 2024.
| lotsofpulp wrote:
| I have never seen it referred to as financial year until now,
| but I guess it makes sense too. Fiscal year is the typically
| used term.
| scrlk wrote:
| Looks like it depends on where you are in the world.
| "Financial year" appears to be the preferred phrase over in
| the UK.
| epolanski wrote:
| Every company I know of estimates future revenue.
|
| It's not black magic: they have contracts in place and know
| both how many GPUs will be produced and sold, give or take a
| few percent.
| rightbyte wrote:
| Seems like the shovel seller is on top of this AI thing?
| [deleted]
| solardev wrote:
| I miss the small graphics company that used to care about gamers
| :(
| fnordpiglet wrote:
| Well, they still make gamer cards. As a company with more than
| one employee they are able to multitask, and the knock-on
| benefits of all the investment will improve their gaming
| products as well. I think there are a fair amount of dual-use
| cards being sold - I know I've got a 4090 that I use for local
| AI stuff, and it renders RTX Witcher 3 like a beast.
| unpopularopp wrote:
| I actually have current-gen GPUs from all 3 manufacturers
| through my job, and I'm glad there are choices now, but I'd
| still recommend Nvidia over AMD or Intel to anyone. Of course
| it depends on the budget, the games you play, etc., but DLSS
| alone is such a difference that AMD still couldn't catch up.
| I really hope Starfield delivers, because it will be the first
| game with FSR 3.0 and will introduce the technology, yet DLSS
| 3.5 was just revealed yesterday. It's a huge gamble for sure
| going all in on Starfield, but tbh that's one of the hypest
| games of the year, so worth it. And Intel is nowhere near that
| (apart from the price and getting 16GB for cheap)
| FirmwareBurner wrote:
| Intel has entered the chat. If you wanna game on a budget with
| lots of VRAM, go for an A750 or A770.
| bozhark wrote:
| Intel has left the chat.
| FirmwareBurner wrote:
| Intel is very much in the game. Each of their recent big
| driver updates adds double-digit performance boosts on AAA
| titles.
| helf wrote:
| [dead]
| beebeepka wrote:
| When was that? Surely it must have been at least a decade
| before the GTX 970 "4GB" but maybe after all the driver
| cheating in the late 90s and early 2000s.
|
| I no longer buy nvidia hardware but I do enjoy stock price
| getting higher. I just wish I had the sense to buy more, a lot
| more, stock when it was much cheaper. How does a chicken shit
| like me make big money :(
| grouchomarx wrote:
| It's a tough game. Gotta have the guts to get in and stay in
| wmf wrote:
| The 970 was amazing for gaming; the 3.5GB problem was just
| for CUDA.
| TechnicolorByte wrote:
| Nvidia is dragging the entire gaming industry forward with
| ray/path tracing and AI-based resolution and frame scaling.
| Everyone else (i.e., AMD) is following Nvidia's lead.
|
| In what way has Nvidia "forgotten" gamers with the rise of
| their datacenter business?
| wudangmonk wrote:
| Raytracing isn't a thing no matter how much nvidia wants to
| push it. The performance penalty is too big for what amounts
| to something that takes a trained eye to notice. AI resolution
| scaling is nice to have on lower-end devices, but the max
| resolution people actually use is 4K, and I can only think of
| VR where having more than 4K would be nice to have.
|
| My main gripe is that at 4k resolution, top of the line GPUs
| shouldn't be using AI frame scaling to get decent fps unless
| you are taking the raytracing penalty for funsies.
| TechnicolorByte wrote:
| Feels like this comment is stuck in 2019 or something. Have
| you seen DLSS3.5 announced yesterday with ray
| reconstruction? Have you seen path tracing in CP2077?
|
| Seems like you're really dismissing the massive speed ups
| these past few years. Agreed that ray tracing in games is
| only at the beginning. A lot of that is gated by the
| consoles/AMD but that's generally how it goes. Would love
| to see Nvidia in one of the powerful consoles to accelerate
| adoption of these technologies.
| tracerbulletx wrote:
| The card prices have gone up pretty significantly and
| availability has been bad for the last few years. They also
| have been segmenting their product line in ways where some of
| the lower-tier cards are not very compelling vs previous
| release cycles. I don't know if that's attributable to them
| "forgetting about gamers" but it's what people are upset
| about.
| cma wrote:
| Compare price performance and it isn't so bad, assuming you
| add in an adjustment for AMD's lack of features.
| rvz wrote:
| So even without crypto, the prices of GPUs are still
| expensive regardless.
|
| The hoarding isn't going to stop unless there are efficient
| alternatives that are competitive on performance and price.
|
| Perhaps that is why I keep seeing gamers crying over GPU
| prices and unable to find cheap Nvidia cards due to the AI
| bros hoarding them for their 'deep learning' pet projects.
|
| So they settle with AMD instead.
| epolanski wrote:
| Raw performance isn't increasing much; price/performance
| under $700 has barely improved, both now and with the 2000
| series.
| cma wrote:
| It's a combination of algorithms and hardware, but
| raytracing has gone from path traced quake 1 to path traced
| Cyberpunk 2077 in just a few years. The raytracing side of
| things hardware wise has doubled in perf for the same tier
| card each generation.
| solardev wrote:
| Gamers don't have datacenter budgets
| ancientworldnow wrote:
| Gamers like to ignore inflation and increasing fab costs
| and pretend cards should cost the same forever with double
| performance gains every 1.5 years.
| Mountain_Skies wrote:
| For much of tech hardware world, declining costs and
| increasing performance have been the general trends for
| as long as most of us have been alive.
| sgarman wrote:
| Off the top of my head the only thing I can think of that
| did that was TVs.
| [deleted]
| samspenc wrote:
| I think both this comment and GP comment are true in their
| own ways. Nvidia is still pushing the gaming / 3D industry
| faster than its competitors and I would still recommend an
| Nvidia card for reliability and performance over others.
|
| BUT that comes at a price - Nvidia consumer chips are also
| notoriously expensive. If you want best-of-breed for gaming,
| you pay for it.
|
| I am hoping that AMD and Intel will be able to compete with
| Nvidia someday but I'm not holding my breath.
| anjel wrote:
| I've seen a regular stream of reports on HN about people "sort
| of" getting AI done on laptops and machines with no GPU or a
| lowly one. Is it unreasonable or far-fetched to imagine that
| someone figures out how to efficiently get it all done without
| GPUs and pulls the rug out from under Nvidia?
| bob1029 wrote:
| I have an options strategy that is riding on this possibility
| right now.
|
| All you have to do is take 5 seconds in a typical code base to
| determine that the way we write software today isn't exactly...
| ideal. Given another 6-12 months, I cannot comprehend another
| ~OOM not being extracted somewhere simply by making the
| software better.
| SpacePortKnight wrote:
| Just like the existence of MariaDB does not prevent Snowflake
| from being worth $50B, being good enough on a laptop is not
| enough to replace the need for the cutting edge.
| lsh123 wrote:
| Training is very expensive and requires GPUs. What you read
| about is running an already-trained model on consumer devices
| (even phones!).
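| A minimal sketch of that consumer-device side (assuming the
| llama-cpp-python package; the model filename is hypothetical):
| 4-bit quantized weights shrink a 7B-parameter model to roughly
| 4 GB, small enough for laptop RAM.
|
|     from llama_cpp import Llama
|
|     llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")
|     out = llm("Q: Name two planets. A:", max_tokens=32)
|     print(out["choices"][0]["text"])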
| FreshStart wrote:
| Let's assume for a moment you could sort of trade parallel
| computation for vast space and fast search and retrieval. So in
| this hypothetical computational theory, you could build a
| lookup machine from CPUs and SSDs, squeezing the parallel
| cores into one CPU by squeezing the shaders into a million
| hashes. And before you know it you're simulating a microverse
| trying desperately to find out how to avoid climate change.
| What if God hates recursion?
| nabla9 wrote:
| Their revenues are seriously supply-restricted: ~2x revenue if
| chip manufacturing could keep up with demand. Packaging seems
| to be the bottleneck right now.
| [deleted]
| nathias wrote:
| selling shovels for a few different gold rushes seems to be
| profitable
| parhamn wrote:
| A tangent to this I've been thinking about quite a bit is how big
| a moat drivers are to the software/hardware ecosystem.
|
| They're a major moat/hurdle (depending on your perspective) for
| operating systems, new hardware platforms, graphics cards, custom
| chips, and more.
|
| It's interesting to think that we're not _that_ far from being
| able to generate decent drivers for things on the fly with the
| latest code-gen advancements. Relevant to this, that could
| reduce the monopolies here; perhaps as interesting, we could
| have more new complete OSes, with more resources allocated to
| the user experience vs hardware compatibility.
___________________________________________________________________
(page generated 2023-08-23 23:00 UTC)