The American way would probably be still using the units you listed but still meaning 1024, just to be confusing.
American here. This is actually the proper way. KB is 1024 bytes. MB is 1024 KB. The terms were invented and used like that for decades.
Moving to ‘proper metric’ where KB is 1000 bytes was a scam invented by storage manufacturers to pretend to have bigger hard drives.
And then inventing the KiB prefixes was a soft-bellied capitulation by Europeans to those storage manufacturers.
Real hackers still use Kilo/Mega/Giga/Tera prefixes while still thinking in powers of 2. If we accept XiB, we admit that the scummy storage vendors have won.
Note: I’ll also accept that I’m an idiot American and therefore my opinion is stupid and invalid, but I stand by it.
Absolutely. I started with computers in 1981; for me, 1K is 1024 bytes and always will be. 1000 bytes is a scam.
Calling 1048576 bytes an “American megabyte” might be technically wrong, but it’s still slightly less goofy-looking than the more conventional “MiB” notation. I wish you good luck in making it the new standard.
Kilo comes from Greek and has meant 1000 for thousands of years. If you want 2^10 represented with Greek roots, it had better involve “deca” and “di”. Combining “kilo” with “di” would properly denote 2^1000, roughly 1.071508607186267 x 10^301 bytes. KB was wrong when it was invented; it has merely stayed wrong for decades since.
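(A quick sanity check of that figure, as an aside: Python’s arbitrary-precision integers can compute 2^1000 exactly, so no estimation is needed.)

    >>> f"{2**1000:.6e}"     # “di” scaled by “kilo”: 2**1000 bytes
    '1.071509e+301'
    >>> len(str(2**1000))    # 302 decimal digits
    302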
Computers have ruled the planet for longer than the Greeks ever did. The history lesson is appreciated, but we’re living in the future, now, and the future is digital.
No, the correct way is to use the proper fucking metric standard. Use Mi or Gi if you need it. We have computers that can divide large numbers now. We don’t need bit shifting.
The metric standard is to measure information in bits.
Bytes are a non-metric unit: not a power-of-ten multiple of the metric base unit for information, the bit.
If you’re writing “1 million bytes” and not “8 million bits” then you’re not using metric.
If you aren’t using metric then the metric prefix definitions don’t apply.
There is plenty of precedent for the prefixes used in metric to refer to something other than an exact power of 1000 when not combined with a metric base unit. A microcomputer is not one millionth of a computer. A million microscopes do not add up to one scope. Megastructures are not exactly one million times the size of ordinary structures. Etc.
Finally: this isn’t primarily about bit shifting. It’s about computers being based on binary representation, and about memory addresses being stored and communicated as whole numbers of bits, which naturally leads to memory sizes (for entire memory devices or smaller structures) that are powers of two. It also helps that no one is going to introduce an expensive and completely unnecessary division by a power of ten on every memory access just so you can have 1000-byte MMU pages rather than 4096-byte ones.
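To make the MMU point concrete, here’s a minimal sketch (illustrative Python, not any real MMU’s code) of why power-of-two page sizes are cheap: the page number and offset are just bit fields of the address, extracted with a shift and a mask, whereas 1000-byte pages would need a genuine divide and modulo on every translation.

    PAGE_SHIFT = 12                  # 2**12 = 4096-byte pages
    PAGE_SIZE = 1 << PAGE_SHIFT

    addr = 0xDEADBEEF
    page = addr >> PAGE_SHIFT            # upper bits: page number
    offset = addr & (PAGE_SIZE - 1)      # lower 12 bits: offset within page
    assert addr == (page << PAGE_SHIFT) | offset

    # With 1000-byte pages you'd need a real division instead:
    page_dec, offset_dec = divmod(addr, 1000)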
Or maybe metric should measure in Hartleys
Yes it does wtf?
This is such a weird take to me. We don’t even colloquially discuss computer storage in terms of 1000.
The Greek terms were used from the beginning of computing, and the new terms kibi and mebi (etc.) were only added in 1998 when members of the IEC got upset. Despite that, most personal computers still report sizes the binary way. Decimal units are only used on product boxes as marketing terms.
Which ones?
Windows reports sizes in binary and continues to use the Greek terms. Windows still holds the largest market share among PC operating systems.
Yeah well windows is a POS so
Hey, how is “bit shifting” different from division? (The answer may surprise you.)
Bit shifting only works if you want to divide by a power of 2.
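(More precisely, shifting right by n bits is floor division by 2^n, so it covers any power of two, not just 2. A quick illustration in Python:)

    >>> 1_000_000 >> 10      # shift by 10 bits = divide by 1024
    976
    >>> 1_000_000 // 1024    # same result via ordinary division
    976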
interesting, so does the computer have a special “base 10” ALU that somehow implements division without bit shifting?
In general integer division is implemented using a form of long division, in binary. There is no base-10 arithmetic involved. It’s a relatively expensive operation which usually requires multiple clock cycles to complete, whereas dividing by a power of two (“bit shifting”) is trivial and can be done in hardware simply by routing the signals appropriately, without any logic gates.
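For anyone curious what “long division in binary” looks like, here is a minimal shift-and-subtract sketch (an illustrative Python model of the algorithm, not any particular CPU’s divider): each iteration brings down one dividend bit and produces one quotient bit, which is why general division takes multiple cycles while a plain shift is essentially free.

    def divide(dividend: int, divisor: int) -> tuple[int, int]:
        """Unsigned binary long division: returns (quotient, remainder)."""
        assert dividend >= 0 and divisor > 0
        quotient = remainder = 0
        for i in range(dividend.bit_length() - 1, -1, -1):
            # Bring down the next bit of the dividend.
            remainder = (remainder << 1) | ((dividend >> i) & 1)
            quotient <<= 1
            if remainder >= divisor:      # one trial subtraction per bit
                remainder -= divisor
                quotient |= 1
        return quotient, remainder

    assert divide(1_000_000, 1000) == (1000, 0)
    assert divide(2**20, 1024) == (1024, 0)
    assert divide(13, 3) == (4, 1)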
The point of my comment is that division in binary IS bit shifting. There is no other way to do it if you want the exact answer. You can estimate, you can round, but the computational method of division is done via bit shifting of binary expansions of numbers in an ALU.