COMMENT PAGE FOR:
Getting bitten by Intel's poor naming schemes
decryption wrote 2 hours 24 min ago:
I have been stung by the exact same socket name bullshit from Intel and
it is infuriating.
m463 wrote 2 hours 56 min ago:
I wonder if the AMD am4 and am5 stuff has issues like this?
mrandish wrote 4 hours 20 min ago:
Because I don't follow CPUs constantly and only check in from time to
time, all the code names (for cores, CPUs and platforms), generations,
marketing names, model numbers, etc. make it hopelessly confusing. And
it's not just Intel: AMD and other companies have been doing this
chronically for >10 years. It seems almost like intentional obfuscation
yet I can't really think of a long-term reason that creating confusion
systemically is in the company's interest. Sure, every company
occasionally has a certain generation they might like to forget but
that's too unpredictable to be the motivation behind such a consistent
long-term pattern.
So I suspect maybe it's just a perverse effect of successive
generations of marketing and product managers each coming up with a new
system "to fix the confusion?" What's strange is that there's enough
history here that smart people should be able to recognize there's a
chronic problem and address it. For example, relatively simple patterns
like Era Name (like "Core"), Generation Number, Model Number - Speed
and then a two-digit sub-signifier for all the technical variants. Two
digits drawn from upper case letters and digits 1-9 (35 symbols, so
35 x 35 = 1225 combinations) are enough to encode >1200 sub-variants
within each Era/Gen/Model/Speed.
The maddening part is that they not only change the classifiers, they
also sometimes change the number and/or hierarchy of classifiers, which
eliminates any hope of simply mapping the old taxonomy to the new.
kccqzy wrote 3 hours 7 min ago:
Tech journalism should help. It's basically curation. I also
don't follow CPUs constantly. When I need to buy CPUs, I go to a
few publications (say Ars Technica), search their archives for
discussions of CPUs published within the last two years and see what
the editors think.
Of course it's only a solution if you are buying. If you're writing
low-level software for these outside userspace, I suppose you'll
have to follow the development of CPUs.
baden1927 wrote 6 hours 46 min ago:
Cross-socket E7-8890 v4/Socket LGA2011-1 GPU/CPU extensions for
Blackwell 100.
D13Fd wrote 9 hours 8 min ago:
I agree their name scheme sucks. But the way to buy a new CPU is to
check with the motherboard vendor about what CPUs the motherboard
supports. You can't expect it to work (although it may) if the
motherboard maker doesn't list it as supported.
Having some portion of the socket name stay the same can still be
helpful to show that the same heatsinks are supported. I agree there
are many far better ways Intel could handle this.
yonatan8070 wrote 10 hours 1 min ago:
Since everyone is complaining about the naming schemes of CPUs, I'll
pitch in.
An Intel Core Ultra 7 155U and a Core Ultra 7 155H are very different
classes of CPUs!
If you're comparing laptops, you'll see both listed, and laptops with
the U variant will be significantly cheaper, because you get half the
max TDP, 4 fewer cores, 8 fewer threads, and a worse GPU.
This isn't to say the 155U is a bad chip, it's just a low-power
optimized chip, while the 155H is a high-performance chip, and the
difference between their performance characteristics is a lot larger
than you'd expect when looking at the model numbers. Heck, if you
didn't know better, you might text your tech-savvy friend "hey is a 155
good?", and looking that up would bring up the powerful H version.
the_pwner224 wrote 9 hours 54 min ago:
And the 285H is lower performance than a 275HX.
Their laptop naming scheme at least is fairly straightforward once
you figure it out.
U = Low-TDP, for thin & light devices
H = For higher-performance laptops, e.g. Dell XPS or midrange gaming
laptops
HX = Basically the desktop parts stuffed into a laptop form factor,
best perf but atrocious power usage even at idle. Only for gaming
laptops that aren't meant to be used away from a desk.
And within each series, bigger number is better (or at least not
worse - 275HX and 285HX are practically identical).
E39M5S62 wrote 5 hours 15 min ago:
Don't forget the V series in there. I have an Intel(R) Core(TM)
Ultra 7 258V in my Thinkpad. I think they're still being made. I
bought an open box Thinkpad T14s Gen 6 with it - they come with a
nicer GPU than the Ultra 7 255U.
wtallis wrote 1 hour 24 min ago:
The V series is a one-off thing Intel did, but they don't have a
direct successor planned.
Previously, they had a P series of mobile parts in between the U
and H series (Alder Lake and Raptor Lake). Before that, they had
a different naming scheme for the U series equivalents (Ice Lake
and Tiger Lake). Before that, they had a Y series for even lower
power than U series.
So they mix up their branding and segmentation strategy to some
extent with almost every generation, but the broad strokes of
their segmentation have been reasonably consistent over the past
decade.
E39M5S62 wrote 19 min ago:
Very interesting. I was a bit out of the loop on Intel mobile
CPUs; I looked up the benchmark specs for it when purchasing
and saw that it generally trounces the 255U.
I've been really quite happy with it - most of the time the CPU
runs at about 30 deg C, so the fan is entirely off. General
workloads (KDE, Vivaldi, Thunderbird, Konsole) put it at about
5.5 watts of power draw.
vladde wrote 10 hours 39 min ago:
at least they are not renaming retroactively.
looking at you USB 3.0 (or USB 3.1 Gen 1 (or USB 3.2 Gen 1))
Corrado wrote 10 hours 32 min ago:
AWS just renamed their Security Hub service to Security Hub CSPM and
then created a new service named Security Hub that is related but
completely different than the original service.
kjs3 wrote 9 hours 33 min ago:
And there's AWS S3, and there's AWS Glacier. And there's AWS S3,
Glacier storage tier, which isn't Glacier. Which is OK, because
Glacier is going away, and you should use S3, Glacier tier. Unless
you're already using it, in which case you can still use it. So
you still have to know Glacier and Glacier, while both storing your
data, aren't technically the same thing.
But if you think that's bad, you haven't seen the name change
shenanigans Microsoft pulls in Azure.
pkphilip wrote 11 hours 25 min ago:
Intel and AMD naming schemes are extremely confusing these days. I can
understand that naming these things must be really complicated now that
we have different core counts, thread counts, different
types of cores, different clock speeds etc., but still.
PaulHoule wrote 12 hours 14 min ago:
They have to make "shit creek" to put an end to all those water
bodies.
Suggger wrote 13 hours 33 min ago:
It's fascinating how 'Naming Schemes' are supposed to clarify hierarchy
but end up creating more chaos. When the signifier (FCLGA2011) detaches
from the signified (physical compatibility), the system is officially
broken. Feels like a hardware version of a bureaucratic loop.
shortercode wrote 14 hours 1 min ago:
I recall standing in CEX one day perusing the cabinet of random
electronics ( as you do ) and wondering why the Intel CPUs were so
cheap compared to the AMD ones. I eventually concluded that the cross
generation compatibility of zen cpus meant they had a better resale
value. Whereas if you experienced the more common mobo failure with an
Intel chip you were likely looking at replacing both.
Yizahi wrote 14 hours 26 min ago:
Yeah, Intel has had some crazies in the naming department since they
abandoned NetBurst, which had a clear generation number and frequency in
the name. I remember having two CPUs with the exact same name, E6300, for
the exact same socket, LGA775, but the difference was 1 GHz and cache size.
Like, ok, I can understand that they were close enough, but at least
add something to the model number to distinguish them.
kosolam wrote 15 hours 9 min ago:
Wow $15 for that CPU sounds great.
mlsu wrote 5 hours 37 min ago:
Until you see your electricity bill.
cyral wrote 6 hours 52 min ago:
Wild that it was released in 2016 for almost $9,000
titaniumtown wrote 14 hours 28 min ago:
Yea, old server hardware can be super cheap! In my opinion though,
the core counts are misleading. Those 24 cores are not comparable to
the cores of today. Plus IPC+power usage are wildly different. YMMV
on if those tradeoffs are worth it.
MadameMinty wrote 15 hours 17 min ago:
That reminds me of when I got a server-grade Xeon E5472 (LGA771) and after
some very minor tinkering (knife, sticker mod) fit it into a cheap
consumer-grade LGA775 socket. Same microarchitecture, power delivery
class, all that.
LGA2011-0 and LGA2011-1 are very unalike, from the memory controller to
vast pin rearrangement.
So not only do they call two different sockets almost the same, as per
the post, but they also call essentially the same sockets differently
to artificially segment the market.
nnevatie wrote 15 hours 35 min ago:
Do you think Intel names things poorly?
NVidia has these, very different GPUs:
Quadro 6000,
Quadro RTX 6000,
RTX A6000,
RTX 6000 Ada,
RTX 6000 Workstation Edition,
RTX 6000 Max-Q Workstation Edition,
RTX 6000 Server Edition
PunchyHamster wrote 15 hours 20 min ago:
less worse.
It would be like having Quadro 6000 and 6050 be completely different
generations.
masklinn wrote 12 hours 30 min ago:
The GeForce 700 series came in 3 different microarchitectures. Most
were on Kepler, but there were several Fermi parts (the previous uarch)
and a few mobile chips used Maxwell (the following architecture).
Lest anyone think AMD is any better, the Radeon 200 series came in
everything from TeraScale 2 (4 years old at that point) to GCN 3.
The gpu manufacturers have also engaged in incredible amounts of
rebadging to pad their ranges, some cores first released on the
GeForce 8000 series got rebadged all the way until the 300 series.
justsomehnguy wrote 5 hours 25 min ago:
MX440, my beloved.
Somewhat surprisingly it sometimes had better performance than the
Radeon 9200, precisely because it lacked pixel shaders and yet had
good enough raw performance otherwise.
black3r wrote 13 hours 19 min ago:
There are GPUs from 3 different generations in that list... Quadro
6000 is an old Fermi from 2010, Quadro RTX6000 is Turing from 2018,
RTX6000 Ada is Ada from 2022...
Oh and there's also RTX PRO 6000 Blackwell which is Blackwell from
2025...
jerf wrote 5 hours 52 min ago:
I gave up understanding GPU names a long time ago. Now I just
hope the efficient market hypothesis is at least moderately
effective and as long as I buy from a reputable retailer the
price is at least mostly reflective of performance.
They've hyperoptimized all these marketing buzzwords to the point
that I'm basically forced into the moral equivalent of buying GPU
by the pound because I have no idea what these marketers are
trying to tell me anymore. The only stat I really pay attention
to is VRAM size.
(If you are one of those marketers, this really ought to give you
something to think about. Unless obfuscation is the goal, which I
definitely can not exclude based on your actions.)
g947o wrote 13 hours 55 min ago:
Ah, I see who you are insinuating
valexiev wrote 15 hours 38 min ago:
Sounds like a great candidate for a Cybersecurity Knowledge Graph.
deathanatos wrote 16 hours 7 min ago:
This reminds me of my ASRock motherboard, though this was over a decade
ago now. The actual board was one piece of hardware, but the manual it
shipped with was for a different piece of hardware. Very similar, but
not identical (and worse, not identical where I needed them to be,
which, naturally, is both the only reason I noticed and how these
things get noticed…), yet both manual and motherboard had the
same model number. ASRock themselves appeared utterly unaware that they
had two separate models wandering around bearing the same name, even
after it was pointed out to them.
The next motherboard (should RAM ever cease being the tulip du jour)
will not be an ASRock, for that and other reasons.
For the love of everything though, just increment the model number.
kwanbix wrote 17 hours 8 min ago:
I don't know why, but most tech companies are horrible at naming
products.
sph wrote 5 hours 52 min ago:
It's because naming products is done by the marketing dept. Sometimes
they decide to increase a "major" version number for a product that
is a rehash of a previous line just to confuse people and sell more
units.
People believe "bigger number" = better, and marketing teams exploit
that.
thisislife2 wrote 12 hours 46 min ago:
At least with CPUs, I believe the retail product names are
deliberately confusing by design so that you as a consumer get
confused (and misled) into buying older models, whose sales tend to
stagnate when newer models are released. (Newer models are, of course,
obscenely priced to differentiate them). A somewhat aware tech
consumer would like to buy the latest affordable model they can. But
if they can't easily identify the latest model or the next best one
after it, they will often end up purchasing some older model with a
similar name.
knorker wrote 15 hours 31 min ago:
This is too forgiving of Intel in this case. It has a name. They just
don't use it. "Sockets Supported: FCLGA2011". It's not like this is
poorly named. It's not even true.
agos wrote 15 hours 43 min ago:
you know, there are two hard problems in computer science...
mcny wrote 15 hours 5 min ago:
For today's lucky ten thousand, the joke is that
> There are only two hard things in Computer Science: cache
invalidation, naming things, off-by-one errors.
tmtvl wrote 11 hours 57 min ago:
I thought there were 3 difficult problems: naming things, cache
invalidation, , and off by one errors. concurrency
chamomeal wrote 10 hours 17 min ago:
the concurrency twist got a laugh out of me, I've seen this
joke a zillion times but never the concurrency bit
latexr wrote 13 hours 27 min ago:
You explained one thing but introduced another needing
explanation.
HTML [1]: https://xkcd.com/1053/
baklazan wrote 15 hours 3 min ago:
Why do people say that, when the number one hardest problem is
making good abstractions?
latexr wrote 13 hours 28 min ago:
Because it's a "famous" (in our circles) quote. You might
prefer this one:
> There's two hard problems in computer science: We only have
one joke and it's not funny.
myrmidon wrote 13 hours 4 min ago:
There is at least one more joke:
"There is 10 kinds of people, those who can read binary and
those who can't."
Personally I prefer the cache invalidation one.
latexr wrote 11 hours 25 min ago:
> "There is 10 kinds of people, those who can read binary
and those who can't."
I like the continuation (which requires knowledge of the
original): "And those who didn't expect this joke to be
in base 3".
ncruces wrote 14 hours 52 min ago:
Names abstract things.
monster_truck wrote 17 hours 23 min ago:
LGA2011 was an especially cursed era of processors and motherboards.
In addition to all of the slightly different sockets there was ddr3,
ddr3 low voltage, the server/ecc counterparts, and then ddr4 came out
but it was so expensive (almost more expensive than 4/5 is now compared
to what it should be) that there were goofy boards that had DDR3 & DDR4
slots.
By the way it is _never_ worth attempting to use or upgrade anything
from this era. Throw it in the fucking dumpster (at the e waste
recycling center). The onboard sata controllers are rife with data
corruption bugs and the caps from around then have a terrible
reputation. Anything that has made it this long without popping is most
likely to have done so from sitting around powered off. They will also
silently drop PCI-E lanes even at standard BCLK under certain
utilization patterns that cause too much of a vdrop.
This is part of why Intel went damn-near scorched earth on the
motherboard partners that released boards which broke the contractual
agreement and allowed you to increase the multipliers on non-K
processors. The lack of validation under these conditions contributed
to the aforementioned issues.
ryukoposting wrote 10 hours 52 min ago:
Oh don't worry, the horrors are returning!
HTML [1]: https://www.techpowerup.com/343672/asrock-h610m-combo-mother...
lachiflippi wrote 16 hours 36 min ago:
>and allowed you to increase the multipliers on non-K processors
Wasn't this the other way around, allowing you to increase
multipliers on K processors on the lower end chipsets? Or was both
possible at some point? I remember getting baited into buying an H87
board that could overclock a 4670K until a bios update removed the
functionality completely.
kasabali wrote 4 hours 49 min ago:
Should be so, multiplier is locked at cpu level not firmware.
7bees wrote 17 hours 37 min ago:
It has pretty much always been the case that you need to make sure the
motherboard supports the specific chip you want to use, and that you
can't rely on just the physical socket as an indicator of compatibility
(true for AMD as well). For motherboards sold at retail the
manufacturer's site will normally have a list, and they may provide
some BIOS updates over time that extend compatibility to newer chips.
OEM stuff like this can be more of a crapshoot.
All things considered I actually kind of respect the relatively
straightforward naming of this and several of Intel's other sockets.
LGA to indicate it's land grid array (CPU has flat "lands" on it, pins
are on the motherboard), 2011 because it has 2011 pins. FC because
it's flip chip packaging.
tristor wrote 7 hours 55 min ago:
> For motherboards sold at retail the manufacturer's site will
normally have a list, and they may provide some BIOS updates over
time that extend compatibility to newer chips.
Ah, but if you want to buy a newly released CPU and the board does
support/work with it, but nobody has updated the documentation on the
website: How do you know?
Ultimately it's always a crapshoot. Some manufacturers don't even
provide release notes with their BIOS updates...
Back in the day, this is what forums were for. Unfortunately forums
are dead, Facebook is useless, and Google search sucks now. So you
should just buy it, if it doesn't work ask for a refund and if they
refuse just do a chargeback.
duskwuff wrote 16 hours 39 min ago:
> All things considered I actually kind of respect the relatively
straightforward naming of this and several of Intel's other sockets.
That's an industry-wide standard across all IC manufacturing - Intel
doesn't really get to take credit for it.
tomcam wrote 17 hours 58 min ago:
How dare they accuse Intel of any kind of naming scheme at all.
Everyone who's anyone knows it's an act of stochastic terrorism.
ocdtrekkie wrote 18 hours 27 min ago:
In fairness, the author should've known something was up when they
thought they could put a multiple year newer chip in an Intel board.
That sort of cross-generational compatibility may exist in AMD land but
never in Intel.
justinclift wrote 14 hours 5 min ago:
The author would likely be able to put a v3 generation processor in
the motherboard, they just didn't do the necessary research to find
that out before pulling the trigger.
userbinator wrote 16 hours 31 min ago:
It sounds like you've never heard of Socket 370 or Slot 1.
justsomehnguy wrote 13 hours 9 min ago:
It sounds like you've successfully inserted Tualeron into BP6 and it
worked out of the box.
justsomehnguy wrote 2 hours 25 min ago:
me: looks at the updoots
me: mueheheh
mort96 wrote 17 hours 43 min ago:
I mean sure, that would seem suspicious. But not suspicious enough
that I'd likely have caught the problem. It's not that far fetched
that Intel may occasionally make new CPUs for older sockets, and when
Intel's documentation for the motherboard says "uses socket
FCLGA2011" and Intel's documentation for the CPU says "uses socket
FCLGA2011", I too would have assumed that they use the same socket.
bjackman wrote 18 hours 35 min ago:
I work in CPU security and it's the same with microarchitecture. You
wanna know if a machine is vulnerable to a certain issue?
- The technical experts (including Intel engineers) will say something
like "it affects Blizzard Creek and Windy Bluff models"
- Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63
then the CPU is affected". (There is no database for this you can only
find it out by actually booting one up).
- The spec sheet for the hardware calls it a "Xeon Osmiridium
X36667-IA"
Absolutely none of these forms of naming have any way to correlate
between them. They also have different names for the same shit
depending on whether it's a consumer or server chip.
Meanwhile, AMD's part numbers contain a digit that increments with each
year but is off-by-one with regard to the "Zen" brand version.
Usually I just ask the LLM and accept that it's wrong 20% of the time.
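To make the CPUID side of that concrete, here is a minimal sketch (assuming
GCC or Clang on x86-64, using <cpuid.h>) of how such a check works. The
family/model decode follows the documented leaf-1 encoding; "leaf 0x3aa,
bit 63" is a made-up example from the comment above, so the second check is
purely illustrative (each CPUID register is only 32 bits wide, so a real
advisory would name a specific register and bit):

    /* Sketch: decode display family/model/stepping from CPUID leaf 1 (what
       errata lists and microarchitecture tables actually key off), then
       test a hypothetical feature bit. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;

        /* Leaf 1, EAX: stepping[3:0], model[7:4], family[11:8],
           ext. model[19:16], ext. family[27:20]. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;
        unsigned stepping   = eax & 0xF;
        unsigned model      = (eax >> 4) & 0xF;
        unsigned family     = (eax >> 8) & 0xF;
        unsigned ext_model  = (eax >> 16) & 0xF;
        unsigned ext_family = (eax >> 20) & 0xFF;
        if (family == 0xF) family += ext_family;
        if (family == 0x6 || family == 0xF) model += ext_model << 4;
        printf("family 0x%x, model 0x%x, stepping %u\n",
               family, model, stepping);

        /* Hypothetical vulnerability bit: leaf 0x3aa is invented in the
           comment above; EDX bit 31 stands in for "bit 63" here. */
        if (__get_cpuid_count(0x3aa, 0, &eax, &ebx, &ecx, &edx) &&
            (edx & (1u << 31)))
            printf("affected\n");
        return 0;
    }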
hedgehog wrote 8 hours 39 min ago:
Oh, the Xeons have the vX vs vY nonsense, where the same number
but a different version is an entirely different CPU (like the 2620
v1 and v2 are different microarchitecture generations and core
counts). But, not to leave AMD out, they do things like the Ryzen
7000 series which are Zen 4 except for the models that are Zen 2 (!).
(yes if you read the middle digits there's some indication but that's
not that helpful for normal customers).
duxup wrote 9 hours 20 min ago:
That's been the case with hardware at several companies I was at.
I was convinced that the process was encouraged as a sort of weird
gatekeeping by folks who only used the magic code names.
Even better, I worked at a place where they swapped code names between
two products at one time... it wasn't without any reason, but it meant
that a lot of product documentation suddenly conflicted.
I eventually only referred to exact part numbers and model numbers
and refused to play the code name game. This turned into an amusing
situation where some managers who only used code names were suddenly
silent as they clearly didn't know the product / part to code name
convention.
mastax wrote 9 hours 55 min ago:
Also, technically the code names are only for unreleased products, so
on Ark it'll say "products formerly Ice Lake" but Intel
will continue to call them Ice Lake.
wyldfire wrote 9 hours 59 min ago:
> Absolutely none of these forms of naming have any way to correlate
between them.
I've found that -- as of ~a decade ago, at least -- ark.intel.com had a
really good way to cross-reference among codenames / SKUs / part
numbers / feature sets/specs. I've never seen errata there, but they
might be. Also, I haven't used it in a long time so it could've
gotten worse.
bjackman wrote 8 hours 30 min ago:
Intel do have a website where you can look up SKUs. If you wait
long enough and exploit certain bugs in the JS you can get it to
give you a bunch of CSV files.
Now the only issue you have is that there is no consistent schema
between those files so it's not really any use.
balou23 wrote 12 hours 0 min ago:
I hear you.
Coincidentally, if anyone knows how to figure out which Intel CPUs
actually support 5-level paging / the CPUID flag known as la57,
please tell me.
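For LA57 specifically, the runtime check is at least well defined even if
the per-SKU documentation isn't: 5-level paging is advertised in CPUID
leaf 7, sub-leaf 0, ECX bit 16, which is what the "la57" flag in
/proc/cpuinfo reflects. A minimal sketch (assuming GCC or Clang's
<cpuid.h>); of course this only answers the question for hardware you can
already boot, which is exactly the complaint:

    /* Check 5-level paging support: CPUID.(EAX=07H, ECX=0):ECX[bit 16]. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        int la57 = __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) &&
                   (ecx & (1u << 16));
        printf("la57 (5-level paging): %s\n", la57 ? "yes" : "no");
        return 0;
    }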
TiredOfLife wrote 12 hours 22 min ago:
> Meanwhile, AMD's part numbers contain a digit that increments with
each year but is off-by-one with regard to the "Zen" brand version.
Under [1] Ryzen 7000 series you could get zen2, zen3, zen3+, zen4
HTML [1]: https://en.wikipedia.org/wiki/Ryzen#Mobile_6
zrm wrote 14 hours 18 min ago:
These have been my go-to for a while now: [1] [2] It doesn't have the
CPUID but it's a pretty good mapping of model numbers to code names
and on top of that has the rest of the specs.
HTML [1]: https://en.wikipedia.org/wiki/List_of_Intel_Core_processors
HTML [2]: https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors
greggsy wrote 14 hours 22 min ago:
Do you just have banks of old CPUs from every generation to test
against?
bjackman wrote 8 hours 24 min ago:
Nope. Recently I had to use my company card to buy an ancient
mini-PC from eBay just so I could get access to a certain Skylake
model
numpad0 wrote 14 hours 49 min ago:
- sSpec S0ABC = "Blizzard Creek" Xeon type 8 version 5 grade 6 getConfig(HT=off, NX=off, ECC=on, VT-x=off, VT-d=on)=4X Stepping B0
- "Blizzard Creek" Xeon type 8 -> V3 of Socket FCBGA12345 -> chipset "Pleiades Mounds"
- CPUID leaf 0x3aa = Model specific feature set checks for "Blizzard Creek" and "Windy Bluff" (aka Blizzard Creek V2)
- asserts bit 63 = that buggy VT-d circuit is not off
- "Xeon Osmiridium X36667-IA" = marketing name to confuse specifically you (but also IA-36-667 = (S0ABC|S9DFG|S9QWE|QA45P))
disclaimer: above is all made up and I don't work at any of the relevant companies
automatic6131 wrote 15 hours 3 min ago:
> AMD's part numbers contain a digit that increments with each year
Aha, but which digit? Sure, that's easy for server, HEDT and desktop
(it's the first one) but if you look at their line of laptop chips
then it all breaks down.
pezezin wrote 11 hours 39 min ago:
Lol no, for servers (Epyc) it is the last digit. Why? Who knows, to
make it more confusing I guess.
adamsb6 wrote 7 hours 52 min ago:
Well yeah Epyc is little endian
zamadatix wrote 10 hours 9 min ago:
I admit it took me until the 4th gen Epyc to realize this. I
laughed out loud at myself/the numbering scheme.
ErroneousBosh wrote 15 hours 19 min ago:
I feel like it's a cultural thing with the designers. Ceragon were
the exact same when I used to do microwave links. Happy to provide
demo kit, happy to provide sales support, happy to actually come up
and go through their product range.
But if you want any deep and complex technical info out of them, like
oh maybe how to configure it to fit UK/EU regulatory domain RF rules?
Haha no chance.
We ended up hiring a guy fluent in Hebrew just to talk to their
support guys.
Super nice kit, but I guess no-one was prepared to pay for an
interface layer between the developers and the outside world.
countWSS wrote 15 hours 58 min ago:
I've also found the same thing a decade ago:
apparently lots of features (e.g. specific instructions, the iGPU)
are broadly advertised as belonging to a specific arch,
but Pentium/Celeron (or, for the premium stuff, non-Xeon) models often
lack them entirely, and
the only way to detect that is lscpu/feature bits/digging in UEFI
settings.
7bit wrote 16 hours 9 min ago:
I have three Ubuntu servers and the naming pisses me off so much. Why
can't they just stick with their YY.MM naming scheme everywhere?
Instead, they mostly use code names and I never know what codename I
am currently using and what is the latest code name. When I have to
upgrade or find a specific Python ppa for whatever OS I am running, I
need to research 30 minutes to correlate all these dumb codenames to
the actual version numbers.
Same with Intel.
STOP USING CODENAMES. USE NUMBERS!
Saris wrote 8 hours 53 min ago:
Same problem I have with Debian.
At least Fedora just uses a version number!
marcosdumay wrote 5 hours 57 min ago:
Debian is trying hard to switch to numbers. It's the user base
that is resisting the change.
Maybe they should stop symlinking the new versions after 14,
because AFAIK, they already tried everything else.
Saris wrote 5 hours 12 min ago:
Yeah if they just stopped using a release name that'd probably
do it, although communities can be surprisingly stubborn on
some things.
kevin_thibedeau wrote 7 hours 27 min ago:
I like to think that Buster, Bullseye, and Bookworm were a ploy to
make people more dependent on the version number.
Saris wrote 7 hours 15 min ago:
I work with Debian daily and I still couldn't tell you what
order those go in. But Debian 12, Debian 13, etc. are perfectly
easy to remember and search for.
throwaway173738 wrote 11 hours 21 min ago:
Try cat /etc/os-release. The codenames are probably there. I know
they are for Debian.
ramses0 wrote 10 hours 24 min ago:
Thank you! I was just about to kvetch about how difficult it was
to map (eg) "Trixie" == "13" because /etc/debian_version didn't
have it... I always ended up having to search the internet for it
which seemed especially dumb for Debian!
daedric7 wrote 14 hours 17 min ago:
They can't. They used to, until they tried to patent 586...
Meneth wrote 14 hours 12 min ago:
Trademark.
taneliv wrote 14 hours 20 min ago:
Protip, if you have access to the computer: `lsb_release -a` should
list both release and codename. This command is not specific to
Ubuntu.
Finding the latest release and codename is indeed a research task.
I use Wikipedia[1] for that, but I feel like this should be more
readily available from the system itself. Perhaps it is, and I just
don't know how?
HTML [1]: https://en.wikipedia.org/wiki/Ubuntu#Releases
yjftsjthsd-h wrote 11 hours 26 min ago:
> Protip, if you have access to the computer: `lsb_release -a`
should list both release and codename. This command is not
specific to Ubuntu.
I typically prefer
cat /etc/os-release
which seems to be a little more portable / likely to work out of
the box on many distros.
cesarb wrote 10 hours 42 min ago:
That's only if the distro is recent enough; sooner or later,
you'll encounter a box running a distro version from before
/etc/os-release became the standard, and you'll have to look
for the older distro-specific files like /etc/debian_version.
Denvercoder9 wrote 7 hours 21 min ago:
> you'll encounter a box running a distro version from before
/etc/os-release became the standard
Do those boxes really still exist? Debian, which isn't really
known to be the pinacle of bleeding edge, has had
/etc/os-release since Debian 7, released in May 2013. RHEL 7,
the oldest Red Hat still in extended support, also has it.
yjftsjthsd-h wrote 3 hours 59 min ago:
> the oldest Red Hat still in extended support, also has
it.
You would be alarmed to know how long the long tail is. Are
you going to run into many pre-RHEL 7 boxes? No. Depending
on where you are in the industry, are you likely to run
into some ancient RHEL boxes, perhaps even actual Red Hat
(not Enterprise) Linux? Yeah, it happens.
skeletal88 wrote 15 hours 59 min ago:
Yes, I agree, codenames are stupid, they are not funny or clever.
I want a version number that I can compare to other versions, to be
able to easily see which one is newer or older, to know what I can
or should install.
I don't want to figure out and remember your product's clever
nicknames.
kalleboo wrote 16 hours 5 min ago:
As an Apple user, the macOS code names stopped being cute once they
ran out of felines, and now I can't remember which of Sonoma or
Sequoia was first.
Android have done this right: when they used codenames they did
them in alphabetical order, and at version 10 they just stopped
being clever and went to numbers.
black3r wrote 13 hours 30 min ago:
Ubuntu has alphabetical order too, but that's only useful if you
want to know if "noble" is newer than "jammy", and useless if you
know you have 24.04 but have no idea what its codename is.
Android also sucks for developers because they have the public
facing numbers and then API versions which are different and not
always scaling linearly (sometimes there is something like
"Android 8.1" or "Android 12L" with a newer API), and as
developers you always deal with the API numbers (you specify the
minimum API version, not the minimum "OS version", in your code),
and have to map that back to version numbers the users and
managers know to present it to them when you're upping
the minimum requirements...
happymellon wrote 11 hours 53 min ago:
> Ubuntu has alphabetical order too, but that's only useful if
you want to know if "noble" is newer than "jammy"
Well, it was until they looped.
Xenial Xerus is older than Questing Quokka. As someone out of
the Ubuntu loop for a very long time, I wouldn't know what
either of those mean anyway and would have guessed the age
wrong.
josephg wrote 17 hours 11 min ago:
> - Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit
63 then the CPU is affected". (There is no database for this you can
only find it out by actually booting one up).
I'm doing some OS work at the moment and running into this. I'm
really surprised there's no caniuse.com for CPU features. I'm
planning on requiring support for all the features that have been in
every CPU that shipped in the last 10+ years. But it's basically
impossible to figure that out. Especially across Intel and AMD. Can I
assume APIC? IOMMU stuff? Is ACPI 2 actually available on all CPUs or
do I need to have support for the old version as well? It's
very annoying.
opan wrote 2 hours 34 min ago:
CPU Monkey had some neat info like whether a CPU had AV1
hwdec/hwenc, then they redesigned their site and that info is gone
for some reason. I think it was a year or less between finding
their site and them ruining it. [1] [2] A nice reminder to stick
any page you find useful in the wayback machine and/or save a local
copy.
HTML [1]: https://web.archive.org/web/20250616224354/https://www.cpu...
HTML [2]: https://www.cpu-monkey.com/en/cpu-amd_ryzen_7_pro_8840u
ack_complete wrote 4 hours 45 min ago:
This is unfortunately the same for GPUs. The graphics APIs expose
capability bits or extensions indicating what features the hardware
and driver supports, but the graphics vendors don't always publish
documentation on what generations of their hardware support various
features, so your program is expected to dynamically adapt to
arbitrary combinations of features. This is no longer as bad as it
used to be due to consolidation in the graphics market, but people
still have to build ad-hoc crowd sourced databases of GPU caps
bits.
It's also not monotonic: on both the CPU and GPU sides, features can go
away later, either due to a hardware bug or because the vendor lost
interest in supporting them.
bombcar wrote 6 hours 38 min ago:
Even defining "shipped in the last 10 years" is tricky - because
does that mean released or final shipment from the factory or ?
You're often better off picking a subset of CPU features you want to
use and then sampling to see if it excludes something important.
josephg wrote 3 hours 47 min ago:
> then sampling to see if it excludes something important.
But how? That's the question.
throw0101a wrote 11 hours 36 min ago:
> I'm planning on requiring support for all the features that
have been in every CPU that shipped in the last 10+ years. But
it's basically impossible to figure that out.
The easiest thing would probably be to specify the need for
"x86-64-v3":
* [1] RHEL9 mandated "x86-64-v2", and v3 is being considered for
RHEL10:
> The x86-64-v3 level has been implemented first in Intel's
Haswell CPU generation (2013). AMD implemented x86-64-v3 support
with the Excavator microarchitecture (2015). Intel's Atom product
line added x86-64-v3 support with the Gracemont microarchitecture
(2021), but Intel has continued to release Atom CPUs without AVX
support after that (Parker Ridge in 2022, and an Elkhart Lake
variant in 2023).
*
HTML [1]: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_lev...
HTML [2]: https://developers.redhat.com/articles/2024/01/02/explorin...
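As a sketch of what requiring that baseline looks like in code: GCC and
Clang expose __builtin_cpu_supports() for the individual features that make
up x86-64-v3 (AVX2, FMA, BMI2, ...); newer GCC reportedly also accepts the
level name "x86-64-v3" directly, but the per-feature checks below don't
depend on that:

    /* Refuse to run on anything below an x86-64-v3 class CPU
       (roughly Haswell / Excavator or newer). GCC/Clang builtins. */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        if (!(__builtin_cpu_supports("avx2") &&
              __builtin_cpu_supports("fma") &&
              __builtin_cpu_supports("bmi2"))) {
            fprintf(stderr, "x86-64-v3 class CPU required\n");
            return 1;
        }
        puts("x86-64-v3 baseline OK");
        return 0;
    }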
nemetroid wrote 6 hours 7 min ago:
RHEL10 has been released and does require x86-64-v3.
HTML [1]: https://access.redhat.com/solutions/7066628
cesarb wrote 10 hours 47 min ago:
> The easiest thing would probably to specify the need for
"x86-64-v3"
AFAIK, that only specifies the user-space-visible instruction set
extensions, not the presence and version of
operating-system-level features like APIC or IOMMU.
johncolanduoni wrote 13 hours 18 min ago:
Even more fun is that some of those (IOMMU and ACPI version) depend
on motherboard/firmware support. Inevitably there is some
bargain-bin board for each processor generation that doesn't
support anything that isn't literally required for the
CPU/chipset to POST. For userspace CPU features the new
x86_64-v3/v4 profiles that Clang/LLVM support are good Schelling
points, but they don't cover e.g. page table features.
Windows has specific platform requirements they spell out for each
version - those are generally your best bet on x86. ARM devs have
it way worse so I guess we shouldn't complain.
throwaway173738 wrote 11 hours 24 min ago:
At least on ARM you can get TRMs (technical reference manuals) or data
sheets that cover all of the features of a specific processor and also
the markings on the chip that differentiate it from models within the
same family.
baq wrote 17 hours 4 min ago:
I'm pretty sure the number of people at Intel who can tell you
offhandedly the answer to your questions about only Intel
processors is approximately zero, give or take a couple. Digging would
be required.
If you were willing to accept only the relatively high power
variants it'd be easier.
josephg wrote 12 hours 59 min ago:
I'd be happy to support the low power variants as well, but
without spending a bunch of money, I have no idea what features
they have and what they're missing. It's very annoying.
For anyone not familiar with caniuse, it's indispensable for
modern web development. Say you want to put images on a web page.
You've heard of webp. Can you use it? [1] At a glance you see the
answer. 95% of global web users use a web browser with webp
support. It's available in all the major browsers, and has been
for several years. You can query basically any browser feature
like this to see its support status.
HTML [1]: https://caniuse.com/webp
jdiff wrote 12 hours 34 min ago:
That initial percentage is a little misleading. It includes
everything that caniuse isn't sure about. Really it should be
something like 97.5±2.5 but the issue's been stalled for
years.
Even the absolute most basic features that have been well
supported for 30 years, like the HTML "div" element, cap out at
96%. Change the drop-down from "all users" to "all tracked" and
you'll get a more representative answer.
7bees wrote 17 hours 14 min ago:
You can correlate microarchitecture to product SKUs using the Intel
site that the article links. AMD has a similar site with similar
functionality (except that AFAIK it won't let you easily get a list
of products with a given uarch). These both have their faults, but
I'd certainly pick them over an LLM.
But you're correct that for anything buried in the guts of CPUID,
your life is pain. And Intel's product branding has been a disaster
for years.
masklinn wrote 12 hours 43 min ago:
> You can correlate microarchitecture to product SKUs using the
Intel site that the article links.
Intel removed most things older than Sandy Bridge in late 2024 (a few
Xeons remain but afaik anything consumer was wiped with no warning).
It's virtually guaranteed that Intel will remove more stuff in
the future.
andrewf wrote 17 hours 39 min ago:
>"it affects Blizzard Creek and Windy Bluff models'
"Products formerly Blizzard Creek"
WTF does that even mean?
numpad0 wrote 15 hours 37 min ago:
It means Intel M14 and M15 base designs. Except they don't use
numbers.
baq wrote 17 hours 1 min ago:
Product lines are in design and development for years (two years is
lightning fast), and code names can be found for things five or more
years before they are released, so everyone who works with them
knows them better (much better) than the retail names.
7bees wrote 17 hours 33 min ago:
Intel doesn't like to officially use codenames for products once
they have shipped, but those codenames are used widely to delineate
different families (even by them!), so they compromise with the
awkward "products formerly x" wording. Have done for a long time.
orthoxerox wrote 17 hours 3 min ago:
I wouldn't mind them coming up with better codenames anyway.
"Some lower-end SKUs branded as Raptor Lake are based on Alder
Lake, with Golden Cove P-cores and Alder Lake-equivalent cache
and memory configurations." How can anyone memorize this endless
churn of lakes, coves and monts? They could've at least named
them in alphabetical order.
jorvi wrote 14 hours 0 min ago:
AMD does this subterfuge as well. Put Zen 2 cores from 2019 (!)
in some new chip packaging and sell it as Ryzen 10 / 100.
Suddenly these chips seem as fresh as Zen 5.
It's fraud, plain and simple.
tormeh wrote 15 hours 51 min ago:
The entire point of code names is that you can delay coming up
with a marketing name. If the end user sees the code name then
what is even the point? Using the code name in external
communication is really really dumb. They need to decide if it
should be printed on the box or if it's only for internal use,
and don't do anything in between.
adrian_b wrote 9 hours 14 min ago:
The problem, especially at Intel, but also at AMD, is that
they sell very different CPUs under approximately identical
names.
In the very distant past, AMD published what the CPUID
instruction would return for each CPU model that they were
selling. Now this is no longer true, so you have to either
buy a CPU to discover what it really is, or hope that a
charitable soul who has bought such a CPU will publish the
result on the Internet.
Without having access to the CPUID information, the next best
thing is to find on the Intel Ark site whether the CPU model you
see listed by some shop is described, for instance, as
belonging to "Products formerly Arrow Lake S", as that will
at least identify the product microarchitecture.
This is still not foolproof, because the products listed as
"formerly ..." may still be packaged in several variants and
they may have various features disabled during production, so
you can still have surprises when you test them for the first
time.
XCabbage wrote 19 hours 6 min ago:
How did the title end up wrong on HN (schemes vs scenes) and what's the
mechanism to get a mod to fix it?
tlb wrote 16 hours 47 min ago:
Fixed, thanks
rob74 wrote 16 hours 58 min ago:
I assume someone typed it in (possibly on a mobile device with
autocorrect) rather than copy & pasting it (which you would have to
do twice, for the URL and for the title).
yjftsjthsd-h wrote 18 hours 34 min ago:
> and what's the mechanism to get a mod to fix it?
Email them, address is in the guidelines.
johng wrote 19 hours 17 min ago:
This isn't that bad if you compare it to the USB naming fiasco... but
yeah, definitely a problem in the tech industry for a long time.
sofixa wrote 18 hours 50 min ago:
Not really comparable.
With Intel's confusing socket naming, you can buy a CPU that doesn't
fit the socket.
With USB, the physical connection is very clearly the first part of
the name. You cannot get it wrong. Yeah, the names aren't the most
logical or consistent, but USB C or A or Micro USB all mean specific
things and are clearly visibly different. The worst possible scenario
is that the data/power standard supported by the physical connection
isn't optimal. But it will always work.
GuB-42 wrote 7 hours 36 min ago:
It will always work if you want 500 mA at 5V and if 480 Mbps is
sufficient (assuming everything is USB2 compatible nowadays).
But sometimes the extra power or extra data transfer is not an
option. For charging a laptop for instance, you typically need 20V,
if your charger doesn't support that, you can't charge at all. And
then there is Thunderbolt, DisplayPort, Oculink, where the devices
that use these features won't work at all in an incompatible port.
And I am not aware of a device that strictly requires one of the many
flavors of USB 3 or 4, but I can imagine a video capture card
needing that. Raw video requires a lot of bandwidth.
numpad0 wrote 10 hours 20 min ago:
Users aren't supposed to be (choosing && swapping) CPUs by
themselves between these identical sockets (LGA2011 v0 through v3).
These are supposed to be bought in trays and kitted in a shop. So
reusing the same parts for cost saving should not cause issues.
Consumer-oriented sockets (LGA115x) have different notches and pin
counts to prevent this issue - actually, some "different"
consumer-oriented sockets with "different" chipsets are
actually identical, and sometimes you see Chinese bastardized
boards that use discarded server-marked chips and pin-fudged
hacker builds online that should not be possible according to
marketing materials, so there is a whole rabbit hole of its own there.
halapro wrote 16 hours 51 min ago:
> But it will always work
Not at all. If you want to charge your phone, it might "always
work", but if you want to use your monitor with USB hub and pass
power to your MacBook, you're gonna have a hard time.
nativeit wrote 16 hours 38 min ago:
Look for the USB hub that costs several times more than the rest,
and that's the correct one for your use case.
halapro wrote 12 hours 9 min ago:
You're missing the point. Of course "the most expensive one"
will cover it, but price alone should not be a differentiator.
nottorp wrote 17 hours 10 min ago:
> the data/power standard supported by the physical connection
isn't optimal
How polite. It can be useless, not "not optimal". Especially since
usb-c can burn you on a combination of power and speed, not only
speed.
Arrowmaster wrote 18 hours 0 min ago:
I don't think the port names is what they were referring to.
The actual names for each data transfer level are an absolute mess.
1.x has Low Speed and Full Speed
2.0 added High Speed
3.0 is SuperSpeed (yes no space this time)
3.1 renamed 3.0 to 3.1 Gen 1 and added SuperSpeedPlus
3.2 bumped the 3.1 version numbers again and renamed all the
SuperSpeeds to SuperSpeed USB xxGbps
And finally they renamed them again removing the SuperSpeed and
making them just USB xxGbps
USB-IF are the prime examples of "don't let engineers name things,
they can't"
PunchyHamster wrote 15 hours 18 min ago:
> USB-IF are the prime examples of "don't let engineers name
things, they can't"
Engineers don't make names that are nice for marketing team.
But they absolutely do make consistent ones. The engineer
wouldn't name it superspeed, the engineer would encode the speed
in the name
zx8080 wrote 17 hours 40 min ago:
> USB-IF are the prime examples of "don't let engineers name
things, they can't"
While not disagreeing, I'd ask for proof it's not the marketing
department's idea of fun. Just to be sure.
Engineers love consistency. Marketing is at the opposite end of
this spectrum.
LoganDark wrote 18 hours 4 min ago:
> But it will always work.
I can't find a USB-C PD adapter for a laptop that uses less than
100W. As a result, I can't charge a 65W laptop from a 65W port
because the adapter doesn't even work unless the port is at least
100W.
It does not always work.
zx8080 wrote 17 hours 33 min ago:
I've noticed that GaN PD 100W and 65W adapters' output is
actually less (both do not charge my laptop) than the Lenovo 65W
charger's (the one with a non-detachable USB-C cable). The cable does
not matter; I tried with many of them, including ones providing
power from other chargers.
It seems totally random, and you cannot rely on watts anymore.
SEMW wrote 14 hours 7 min ago:
> Cable does not matter, tried with many of them including ones
providing power from other chargers.
That might not necessarily be the right conclusion. My
understanding is: almost all USB-C power cables you will
encounter day to day support a max current of at most 3A (the
most that a cable can signal support for without an emarker).
That means that, technically, the highest power USB-PD profile
they support is 60W (3A at 20V), and the charger should detect
that and not offer the 65W profile, which requires 3.25A.
Maybe some chargers ignore that and offer it anyway, since
3.25A isn't that much more than 3A. For ones that don't and
degrade to offering 60W, if a laptop strictly wants 65W, it
won't charge off of them.
So it's worth acquiring a cable that specifically supports 5A
to try, which is needed for every profile above 60W (and such a
cable should support all profiles up to the 240W one, which is
5A*48V).
(I might be mistaken about some of that, it's just what I
cobbled together while trying to figure out what chargers work
with my extremely-picky-about-power lenovo x1e)
malfist wrote 17 hours 1 min ago:
There's a fair number of misleading or outright wrong specs if
you're buying from Amazon or the like. And even if you're buying
brand name, the specs can be misleading. They often refer to
the maximum output of all the ports, not the maximum output of
a port.
So a 100 watt GaN charger might be able to deliver only 65
watts from its main "laptop" port, but it has two other ports
that can do 25 and 10 watts each. Still 100 watts in total, but
your laptop will never get its 100 watts.
Not every brand is as transparent about this, sometimes it's
only visible in product marketing images instead of real specs.
Real shady.
unsnap_biceps wrote 17 hours 14 min ago:
I have a dell laptop that uses a usbc port to charge, but
doesn't actually use the PD specification, but a custom one, so
my 65w GAN charger falls back to 5v 0.5a and isn't useful at
all. I'd bet dollars to donuts that your Lenovo is doing
similar shit.
zx8080 wrote 14 hours 46 min ago:
No. It can charge from my monitor PD just fine.
And wow, I'll keep away from Dell, thanks.
seszett wrote 17 hours 48 min ago:
For this specific issue I'm surprised, I have used all kinds of
USB PD chargers for my laptops and all of them but one are less
than 100W, with no problem at all.
The ones I use most are 20W and 40W, just stuff I ordered from
AliExpress (Baseus brand I think).
dataflow wrote 18 hours 6 min ago:
> The worst possible scenario is that the data/power standard
supported by the physical connection isn't optimal. But it will
always work.
I don't know what "always work" means here but I feel like I've had
USB cables that transmit zero data because they're only for power,
as well as ones that don't charge the device at all when the device
expects more power than it can provide. The only thing I haven't
seen is cables that transmit zero data on some devices but nonzero
data on others.
dtech wrote 17 hours 40 min ago:
I don't think those cables are in spec, and there are a lot of
faulty devices and chargers that don't conform to the spec
creating these kinds of problem (e.g. Nintendo Switch 1). This is
especially a problem with USB C.
You can maybe blame USB consortium for creating a hard spec, but
usually it's just people saving $0.0001 on the BOM by omitting a
resistor.