I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20min read so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
A lot of people are replying as if OP asked a question. It’s a link to a blog post explaining why a kilobyte is 1000 and not 1024 bytes (exactly as the title says!). OP knows the answer, in fact they know it so well they wrote an extensive post about it.
Thank you for the write-up! You should re-check the spelling and grammar, as some sections have a few issues. There’s one sentence I need to go back to the post to find, so let me edit this comment later!
Edit: the second half of this sentence is a mess: “The factors don’t solely consist of twos, but ten are certainly lot of them.” Otherwise nothing jumped out at me but I would reread it just in case!
I also assume that people are answering that way because they thought it was a question.
However, it’s also possible that they saw it described as a 20 minute read, and knew that the answer actually takes about 10 seconds to read, and figured that they’d save people 19 minutes and 50 seconds.
However, it’s also possible that they saw it described as a 20 minute read
Bit of a tangent and anecdotal, but I went back in to higher education a few years ago. I’m middle-aged, I was surrounded by younger people. We’re asked to read an article, everyone starts reading. I read it through, underline the important bits, I’m done reading. I look around. Everyone’s still reading. Oh well, they’ll be done soon. Nope. I think it took most of them 15 minutes to read an article I’d read in under 5. I was a bit perplexed. This is higher education, these aren’t idiots, these are people who should be able to read articles quickly.
There are plenty of reports of functional literacy decreasing. That children are slower at reading and are less able to understand what they’ve read. Anecdotally, it seems like younger generations really aren’t used to reading longer articles anymore. I grew up reading books as a kid. That’s what we did before phones and the internet. I wonder if younger generations simply don’t have that much experience reading, which is why it takes them so long to read, which is why they read even less.
In the case of this article, they see 20 minutes, they’re scared off. So they simply guess what was in the article. That’s pretty worrying if that’s what people do. If you’re unable or unwilling to read longer stuff, you’re likely to make ill informed choices or be more easily influenced.
TLDR: old person went back to school and reads faster than younger people, thinks younger people don’t know how to read quickly.
Bit ironic that you don’t seem to have read my comment properly.
Firstly, you missed the caveat about the example used being anecdotal.
Then you seem to have missed the bit about reports suggesting functional literacy is decreasing.
A quick google:
https://hechingerreport.org/americas-reading-problem-scores-were-dropping-even-before-the-pandemic/
https://hechingerreport.org/proof-points-why-reading-comprehension-is-deteriorating/

That’s the joke, but ok.
I read slowly. It sucks, but it’s not from lack of experience or lack of education. Reading speed seems a weird metric to start wondering if people lack intelligence.
Being able to read quickly is a valuable skill. I don’t think I could handle jobs like editing, policy making, or lawyering simply because there are not enough hours in the day to make up for my reading deficit.
Of course, your anecdote is about a group, and mine is about one person. But the sweeping conclusion about generations (even if it isn’t a firm one) irks me. Every generation has its outliers. There will never be a generation without hardworking geniuses in every active field. For all I know, you are an outlier in your own generation, and the comparison simply fails. Maybe the peers you knew personally didn’t get the cold judgment of intelligence by reading speed that you’re applying to kids you don’t have a relationship with.
I don’t know. I will never dismiss the importance of reading. But you sound like Lucy here.
I read relatively slowly, but I have the ability to read much faster. I simply like reading more slowly. I have this weird suspicion that people who read very quickly are getting information more quickly, but that they’re either not absorbing it fully, or they’re not enjoying it as much as I do. But that’s obviously a biased perspective.
I had a literature professor who liked to say, “Speed reading is for people who run through museums.”
I’m middle-aged and read slowly. Explain that, asshole.
Ok boomer.
they see 20 minutes, they’re scared off
I’m not “scared off”. I’m on Lemmy to have discussions, not to read articles. If I want to read articles I’ll get a magazine.
Wait, you’re on a link aggregator platform and not interested in the links?
It’s true that the actual “story” is very short. 1 kB is 1000 bytes and 1 KiB is 1024 bytes. But the post is not about this, but about why calling 1024 a kilobyte always was wrong even in a historical context and even though almost everybody did that.
It’s true that the actual “story” is very short. 1 kB is 1000 bytes and 1 KiB is 1024 bytes. But the post is not about this, but about why calling 1024 a kilobyte always was wrong even in a historical context and even though almost everybody did that.
Yes. But it does raise the question of why you didn’t say that in either your title:
Why a kilobyte is 1000 and not 1024 bytes
or your description:
I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20min read so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
The title and description were your two chances to convince people to read your article. But what they say is that it’s a 20 minute read for 10 seconds of information. There is nothing that says there will be historical context.
I get that you might want to make the title more clickbaitey, but why write a description out if you’re not going to tell what’s actually in the article?
So, that’s my feedback. I hope this helps.
One other bit of closely-related feedback, for your writing, in general. Always start with the most important part. Assume that people will stop reading unless you convince them otherwise. Your title should convince people to read the article, or at least to read the description. The very first part of your description is your chance to convince people to click through to the article, but you used it to tell an anecdote about why you wrote the article.
I’m the kind of person who often reads articles all the way through, but I have discovered that most people lose interest quickly and will stop reading.
I tried to make the title the exact opposite of clickbait. There are no unanswered questions on purpose. No “Find out if a kilobyte is 1024 bytes or 1000 bytes”. I think people are smart enough that I don’t need to just reiterate for 20 minutes why a kilobyte is 1000 bytes, so instead I go into more detail.
The main problem is probably that people won’t sacrifice 20 minutes of their time on something they’re not sure is a good read, but the only thing I can do is try to encourage them to read it anyway.
There are no ads, no tracking, no cookies, no login, no newsletter, no paywall. I don’t benefit if you read it. I’d like to clear up misconceptions, but I can’t force people to read it.
I don’t benefit if you read it.
You don’t benefit financially, but there are other benefits. For example, you specifically asked for feedback, and you have received some.
I don’t get feedback just because you read it. I’m thankful for feedback but my sentence was accurate. I don’t benefit if you read it.
Every part of your comment has something factually wrong or fallacious.
I don’t get feedback just because you read it.
My reading the part I am giving feedback on is a prerequisite for actually giving feedback. I am obviously a person who graciously responded to your request, not somebody that you somehow ordered to give feedback. I don’t know what you think you gain from viewing it this way.
I’m thankful for feedback but my sentence was accurate.
I didn’t say it was inaccurate, but that it didn’t tell people why to read the article. You didn’t ask me to tell you inaccuracies. You asked for “feedback”. You also don’t seem to be thankful, because if you were thankful, you’d simply accept the feedback instead of throwing up straw-man arguments.
I don’t benefit if you read it.
You have exactly repeated your previous statement that I already proved wrong.
I will offer you one last piece of feedback. Just stop arguing. You can never look gracious pursuing an argument where you ask for advice and then argue with people who took time out of their day to help you.
Upvotes and downvotes don’t determine whether people are factually right, but they do help you gauge what people think when they read your comments, and what I’m seeing is that you’re not ingratiating yourself to the people who you are asking to read your article. Even if you could win this argument, and you can’t, you wouldn’t want to, because you’d look bad in doing so. When you ask for feedback, and feedback is given, just graciously accept it. If it’s bad feedback, then just ignore it.
But that’s also a simple answer: kilo is a metric prefix that means 1000, so kilobyte means 1000 bytes. The historical context is the history of the metric system, which is much older than modern computers.
Thank you very much. I’ll try to fix that sentence later. I’m not a native speaker, so it’s not always obvious to me when a sentence doesn’t sound right, even though I pass sentences I’m not sure about through spell checks, MS Word’s grammar check and ChatGPT 🤣
This is a great example of how a lot of people don’t read the posts they are replying to.
This is even more prevalent when arguments break out in the comments: people misunderstand each other, or argue about something one side said that they qualified later in the original comment, but the other side didn’t read the whole comment and instead hyperfocused on that one sentence that really garbled their goolies.
I trust that none of these people would have read the article even if they had realised it was there.
P.S. I fully agree with you. It’s a great blog post. Good write-up. Very informative. The only quibble I have is that I’ve always loved the words mebibyte, gibibyte, etc.
OP asked for feedback.
A lot of people are replying as if OP asked a question.
I think part of that is because outgoing links without a preview image are really easy to confuse with text-only posts, particularly because Reddit didn’t allow adding both a text and a link simultaneously. Though in this case the text should’ve tipped people off that there’s a link as well.
As for the actual topic, I agree with OP. I often forget to do it right when speaking, but I try to at least get it right when writing.
Well, it’s because computer science has been around for 60+ years and computers are binary machines. It was natural for everything to be base 2. The most infuriating part is why drive manufacturers arbitrarily started calling 1000 bytes a kilobyte, 1000 kilobytes a megabyte, 1000 megabytes a gigabyte, and 1000 gigabytes a terabyte when, until then, 1 TB was 1,099,511,627,776 bytes. They did this simply because it made their drives appear 10% bigger. So, good ol’ shrinkflation: you could make drives 10% smaller and sell them for the same price.
If a hard drive has exactly 8’269’642’989’568 bytes what’s the benefit of using binary prefixes instead of decimal prefixes?
There is a reason for memory like caches, buffer sizes and RAM. But we don’t count printer paper with binary prefixes because the printer communication uses binary.
There is no(!) reason to label hard drive sizes with binary prefixes.
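To put numbers on that, here’s a small Python sketch (my own, using the example byte count from the comment above) showing what each prefix system does with such a drive:

```python
# Sketch: the example drive size quoted above, expressed with
# decimal (SI) and binary (IEC) prefixes.
drive_bytes = 8_269_642_989_568

TB = 1000**4    # terabyte (SI)
TiB = 1024**4   # tebibyte (IEC)

print(f"{drive_bytes / TB:.3f} TB")    # ~8.270 TB
print(f"{drive_bytes / TiB:.3f} TiB")  # ~7.521 TiB

# The drive size is not a power of two either way, so the binary prefix
# doesn't make the number any "rounder" here.
```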
It more accurately describes how much space you have and how you can expect to see it shown in your software when you actually install it somewhere.
So here’s the thing. I don’t necessarily disagree with you. And if this had been done from the start, it would never have been a problem. But it wasn’t, and THAT is what caused the confusion. You put a lot of thought and research into your post and I can very much respect that. It’s something you feel strongly about and you took the time to write about your beef with this. The IEC changed the nomenclature in the late 90s. But the REASON they changed it was to avoid the confusion caused by the drive manufacturers (I bet you can guess who was on the committee that proposed the change).
But I can tell you as a professional IT person we never really expect any drive (solid state or otherwise) to be any specific size. RAID, file system overhead, block size fragmentation, etc all take a cut. It’s basically just bistromathics (that’s a Hitchhiker’s reference) and the overall size of any storage system is only vaguely related to actual drive size.
So I just want to basically apologize for being so flippant before. It’s important enough to you that you took the time to write this. It’s just that I’m getting rather cynical as I get older and just expect the enshittification of everything digital to continue ad infinitum.
Pretty obvious that you didn’t read the article. If you find the time, I’d like to encourage you to read it. I hope it clears up some misconceptions and makes it clearer why, even in those 60+ years, it was always intellectually dishonest to call 1024 bytes a kilobyte.
You should at least read “(Un)lucky coincidence”
Ok so I did read the article. For one I can’t take an article seriously that is using memes. Thing the second yes drive manufacturers are at fault because I’ve been in IT a very very long time and I remember when HD manufacturers actually changed. And the reason was greed (shrinkflation). I mean why change, why inject confusion where there wasn’t any before. Find the simplest least complex reason and that is likely true (Occam’s razor). Or follow the money usually works too.
It was never intellectually dishonest to call it a kilobyte, it was convenient and was close enough. It’s what I would have done and it was obviously accepted by lots of really smart people back then so it stuck. If there was ever any confusion it’s by people who created the confusion by creating the alternative (see above).
If you wanna be upset you should be upset at the gibi, kibi, tebi nonsense that we have to deal with now because of said confusion (see above). I can tell you for a fact that no one in my professional IT career of over 30 years has ever used any of the **bi words.
You can be upset if you want but it is never really a problem for folks like me.
Hopefully this helps…
Pushing 30 years myself, and I can confirm literally not a single person I’ve worked with has ever used **bi… terms. Also, I recall the switch where drive manufacturers went from 1024 to 1000. I recall the poor attempt from shill writers in tech claiming it better represents the number of bits, since the format parameters applied to a drive change the space available for files. I recall exactly zero people buying that excuse.
Old IT represent!! 😂
I just think that kilobyte should have been 1000 (in binary, so 8 in decimal) bytes and so on. Just keep everything relating to binary storage in binary. That couldn’t ever become confusing, right?
Because your byte is 10 decimal bits, right? EDIT: Bit is actually an abbreviation, BIT, initially, so it would be what, DIT?.. Dits?..
kilobit = 1000 bits. Kilobyte = 1000 bytes.
How is anything about that intellectually dishonest??
The only ones being dishonest are the drive manufacturers, like the person above said. They sell storage drives by advertising them in the byte quantity but they’re actually in the bit quantity.
They sell storage drives by advertising them in the byte quantity but they’re actually in the bit quantity.
No, they absolutely don’t. That’d be off by 8x.
The subject at hand has nothing to do with bits. Please, read what OP posted. It’s about 1024 vs 1000
Calling 1024 a kilo is intellectually dishonest. Your conversion is perfectly fine.
I genuinely don’t understand your disdain for using base 2 on something that calculates in base 2. Do you know how counting works in binary? Every byte is made up of 8 bits, and goes from 0000 0000 to 1111 1111, or 0-255. When converted to larger scales, 1024 bytes is a clean power of two in base 2, while 1000 is not a round number there at all. Your pedantry seems to hinge on the use of the prefix, right? I think 1024 is a better representation of kilo- in base 2, because that kilo- can be directly translated up to exabytes and down to nybbles, while “1000” in base 2 is extremely awkward. The point of metric is specifically to facilitate easy measuring, right? So measuring in the units that the computer uses makes perfect sense. It’s like me saying that a kilogram should be measured in base 60, because that was the original number system.
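To make the base-2 point concrete, here’s a tiny Python illustration (just a sketch of the arithmetic, nothing from the article):

```python
# Sketch: 1024 is a round number in base 2, 1000 is not.
print(bin(1024))   # 0b10000000000  -> exactly 2**10
print(bin(1000))   # 0b1111101000   -> not a power of two

# Scaling by 1024 keeps every step a clean power of two:
for power, unit in enumerate(["B", "KiB", "MiB", "GiB"]):
    print(unit, 1024**power)   # 1, 1024, 1048576, 1073741824
```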
TLDR: the problem isn’t using base 2 multipliers. The problem is doing so then saying it’s a base 10 number
In 1998 when the problem was solved it wasn’t a big deal, but now the difference between a gigabyte and a gibibyte is large enough to cause problems
Using kilo- in base 2 for something that calculates in base 2 simply makes sense to me. However, like I said to OP, ultimately this debate amounts to rage bait for nerds. All I ask is that I’m not pedantically corrected if the conversation isn’t directly related to kibi- vs kilo-
Did you read the post? The problem I have is redefining the kilo because of a mathematical fluke.
You can certainly write a mass in base 60 and in kg; there is nothing wrong with that. But calling 3600 grams a “kilogram” because you think it’s convenient that 3600 (60^2) is “close to” 1000 would be wrong, and that’s exactly what’s happening with binary and 1024.
If you find the time you should read the post and if not at least the section “(Un)lucky coincidence”.
I started reading it, but the disdain towards measuring in base 2 turned me off. Ultimately though this is all nerd rage bait. I’m annoyed that kilobytes aren’t measured as 1024 anymore, but it’s also not a big deal because we still have standardized units in base 2. Those alternative units are also fun to say, which immediately removes any annoyance as soon as I say gibibyte. All I ask is that I’m not pedantically corrected if the discussion is about something else involving amounts of data.
I do think there is a problem with marketing, because even the most know-nothing users are primed to know that a kilobyte is measured differently from a kilogram, so people feel a little screwed when their drive reads 931GiB instead of 1TB.
Yeah I’m with you, I read most of it but I just don’t know where the disdain comes from. At most scales of infrastructure anymore you can use them interchangeably because the difference is immaterial in practical applications.
Like if I am going to provision 2TB I don’t really care if it’s 2000 or 2048GB, I’ll be resizing it when it gets to 1800 either way, and if I needed to actually store 2TB I would create a 3TB volume, storage is cheap and my time calculating the difference is not.
Wait until you learn about how different fields use different precision levels of pi.
It’s not 2000 Vs 2048. It’s 1,862 Vs 2048
The GB get smaller too.
I was confused when I just read the headline. Should be “Why I (that would be you not me) think a kilobyte should be 1000 instead of 1024”. Unpopular opinion would be a better sub for it.
You should read the blog post. It’s not a matter of opinion.
It totally is a matter of opinion. These are arbitrary rules, made up by us. We can make up whatever rules we want to.
I agree that it’s weird that only in CS kilo means 1024. It would be logical to change that, to keep consistency across different fields of science. But that does not make it any less a matter of opinion.
You can’t store data in base 10, nor address memory or storage in base 10 given present computers. It’s a bit more than a matter of opinion that computers are base 2
Yes computers are base 2 but we can still make up whatever rules we want about them. We could even make up rules that say that we are to consider everything a computer does to be in base 10 but it can only use the lowest 2 values of any given digit. It would be a total mess and it would make no sense whatsoever but we could define those rules.
Just because you wrote about a topic doesn’t mean you’re suddenly the authority figure lol.
I know there is no option, as 1024 is what the standard is now. I’m not reading that any more than I’d read someone saying how a red light really means go.
1024 is not the standard. The standard term for 1024 is “kibi” or “Ki” and the standard term for 1000 is “kilo” and has been since the year 1795.
There was a convention to use kilo for 1024 in the early days of computing since the “kibi” term didn’t exist until 1998 (and took a while to become commonly used) — but that convention was always recognised as an incorrect use of the term. People just didn’t care much especially since kilobytes were commonly rounded anyway. A 30,424 byte file is 29.7109375 kibibytes or 30.424 kilobytes… both will likely be rounded to 30 either way, so who cares if it’s slightly wrong? Just use bytes if you need to know the exact size.
Also - hard drives, floppy disks, etc have always referred to their size in base 1000 numbers so if you were working with 30KB in the early days of computers it was very rarely RAM. A PDP-11 computer, for example, might have only had 8192 bytes of RAM (that’s 8 kibibytes).
There are some places where the convention is still used and it can be pretty misleading as you work with larger numbers. For example 128 gigs equals 128,000,000,000 bytes (if using the correct 1000 unit) or 137,438,953,472 bytes (if kilo/mega/giga = 1024).
The “wrong” convention is commonly still used for RAM chips. So a 128GB RAM chip is significantly larger than a 128GB SSD.
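For anyone who wants to check those figures, a quick Python sketch using the numbers quoted above:

```python
# Sketch: the same "128 GB" under the two conventions mentioned above.
si_bytes = 128 * 1000**3       # 128 GB with the SI definition
binary_bytes = 128 * 1024**3   # 128 "GB" under the old kilo = 1024 convention

print(si_bytes)       # 128000000000
print(binary_bytes)   # 137438953472  (about 7.4% more)

# The 30,424-byte file example:
print(30_424 / 1024)  # 29.7109375 (KiB)
print(30_424 / 1000)  # 30.424     (kB)
```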
I’ve never met anyone that actually uses the new prefixes for 1024 and the old prefixes to mean 1000
Also - hard drives, floppy disks, etc have always referred to their size in base 1000 numbers
That is not true. For a long time everything (computer related) was in the base 2 variants. Then the HD manufacturers changed so their drives would appear larger than they actually were (according to everyone’s notion of what KB/MB/GB meant). It was a marketing shrinkflation stunt.
Here’s my favorite part.
“In addition, the conversions were sometimes not even self-consistent and applied completely arbitrarily. The 3½-inch floppy disk for example, which was marketed as “1.44 MB”, was actually not 1.44 MB and also not 1.44 MiB. The size of the double-sided, high-density 3½-inch floppy was 512 bytes per sector, 18 sectors per track, 160 tracks, that’s 512×18×160 = 1’474’560 bytes. To get to “1.44” you must first divide 1’474’560 by 1024 (“bEcAuSE BiNaRY obviously”) to get 1440 and then divide by 1000 for perfect inconsistency, because dividing by 1024 again would get you an ugly number and we definitely don’t want that. We finally end up with “1.44”. Now let’s add “MB” because why the heck not. We already abused those units so much it’s not like they still mean anything and it’s “close enough” anyways. By the way, that “close enough” excuse never worked when I was in school, but what would I know compared to the computer “scientists” back then.
When things get that messy, numbers don’t even mean anything any more. Might as well just label the products using entirely qualitative terms like “big” or “bigger”.
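For what it’s worth, the quoted floppy arithmetic is easy to reproduce; here’s a short Python sketch (my own, just to make the mixed 1024/1000 division explicit):

```python
# Sketch: re-doing the 3.5-inch floppy arithmetic from the quote above.
total = 512 * 18 * 160        # bytes/sector * sectors/track * tracks
print(total)                  # 1474560 bytes

print(total / 1024)           # 1440.0  (divide by 1024 first ...)
print(total / 1024 / 1000)    # 1.44    (... then by 1000 -> the marketed "1.44 MB")

print(total / 1000**2)        # 1.47456   actual megabytes
print(total / 1024**2)        # 1.40625   actual mebibytes
```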
❤️ Thank you for taking the time to read it.
Thanks for this article. Unfortunately, you used the word “prefix” when you really meant “unit symbol”. So, “kilo” and “mega” are prefixes, kB and MB are unit symbols. You repeatedly called the latter “prefixes”.
Thank you for the feedback. I know that only the “first” part is the prefix and I tried to be careful to not use it wrong. I just checked all 53 instances of “prefix” and I don’t see a wrong one, but to be fair there are situations that could be misunderstood easily like here:
Today the only correct conversions are to either use SI prefixes (like 1 MB = 1000² bytes) or binary prefixes (1 MiB = 1024² bytes).
But with prefix I only meant the “M” and “Mi” part and they are both prefixes.
I’ll try to clarify that later so the difference is clear to all readers. Thank you.
Ok, I understand what you are trying to do, but that is not how I read it at the time. “Prefix” to me in this context means e.g. “kilo” in “kilobyte”, and not the “k” in “kB”. I am not sure it is helpful to split the unit symbol up like that.
In terms of language you are correct. But in terms of SI usage it seems to me OP is expressing it correctly. The SI unit prefixes have a name, a symbol and a multiplier. The prefix is a concept that encompasses all three of those attributes. So “kilo” is one way of identifying the 10^3 unit prefix, but the name kilo is not the prefix itself. It’s just the name we use to refer to it. And the symbol k in km is certainly the unit prefix portion of that unit of measure.
But the first part is called a prefix even in the standard itself. I wanted to make that distinction because it’s not important what the base unit is. By speaking about prefixes instead of the unit as a whole, I wanted to make it clear that you can (at least in theory) use any base unit. So everything I said about KiB and kB is also true for Kib and kb, and even for kK (kilokelvin) and KiK (kibikelvin) 🤣
While we’re nitpicking, the post says multiple times that SI prefix symbols are “all uppercase except for kilo (k)”.
That’s just factually wrong. More than half of them are lowercase! There’s centi- (c), micro- (µ), nano- (n), etc. On the positive side there’s even deca- (da) and hecto- (h), though they aren’t particularly common or useful. I did at least see milli- (m) and bit (b) mentioned in a brief note though.
Obviously context matters and only the positive powers from kilo upward are relevant in computer science. But I studied chemistry and physics so I guess it irked me to see the statement repeatedly ignore all the negative powers of ten.
Overall, good rant though 😅 I’ll be more careful to use KiB and MiB from here out when appropriate.
❤️ Thank you for taking the time to read it. And thank you so much for pointing that out, you are completely right and I totally didn’t think about that while writing the article, probably because negative exponents are pretty rare in computer science (as in milli-bytes, etc.). I’ll fix that in a few days. Thanks again for pointing that out.
Removed by mod
I’m not sure if I’m too stupid, but how so?
Removed by mod
when you format a 256GB drive and find out that you don’t actually have 256GB
Most of the time you have at least 256GB. It’s just that 256 GB = 238.4 GiB, and Windows reports GiB but calls them GB. You wouldn’t have that problem in macOS, which counts GB properly, or GNOME, which counts GiB and calls them GiB.
(This is ignoring the few MB that takes to format a drive, but that’s also space on the disk and you’re the one choosing to partition and format the drive. If you dumped a file straight into the drive you’d get that back, but it would be kind of inconvenient)
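A minimal Python sketch of that conversion, in case the numbers help:

```python
# Sketch: a "256 GB" drive as sold vs. how Windows displays it.
drive_bytes = 256 * 1000**3      # 256 GB (decimal, as advertised)

in_gib = drive_bytes / 1024**3   # the same bytes expressed in GiB
print(round(in_gib, 1))          # 238.4 -> shown by Windows, but labelled "GB"

# No storage is missing; the same quantity is just expressed in a larger unit.
```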
So why don’t they just label drives in terabits instead of terabytes? The number would be even bigger. Why don’t Europeans also use Fahrenheit? With the bigger numbers the temperature would surely feel warmer instantly 🤣
Jokes aside: even if HDD manufacturers benefit from “the bigger numbers”, using the 1000 conversion is objectively the only correct answer here, because there is nothing intrinsically base 2 about hard drives. You should give the blog post a read 😉
there is nothing intrinsically base 2 about hard drives
did you miss the part where those devices store binary data?
Binary prefixes (the ones with 1024 conversions) are used to simplify numbers that are exact powers of two, for example RAM and similar types of memory. Hard drive sizes are never exact powers of two. The fact that a disk stores bits doesn’t have anything to do with the size of the disk.
sure, but one of the intrinsic properties of binary data is that it is in binary sized chunks. you won’t find a hard drive that stores 1000 bits of data per chunk.
The “chunk” is often 32,768 bits these days and it never matches the actual size of the drive.
A 120 GB drive might actually be closer to 180 GB when it’s brand new (if it’s a good drive - cheap ones might be more like 130 GB)… and will get smaller as the drive wears out with normal use. I once had a HDD go from 500 GB down to about 50 GB before I stopped using it - it was a work computer and only used for email so 50 GB was when it actually started running out of space.
HDD / SSD sellers are often accused of being stingy - but the reality is they’re selling a bigger drive than what you’re told you’re getting.
Look up the exact number of bytes and then explain to me what the benefit is of using the 1024 conversion instead of 1000 for a hard drive.
SSDs are.
Not even SSDs are. Do you have an SSD? You should lookup the exact drive size in bytes, it’s very likely not an exact power of two.
there is nothing intrinsically base 2 about hard drives
Yes there is. The addressing protocol. Sectors are 512 (2⁹) bytes, and there’s an integer number of them on a drive.
That’s true, but the entire disk size is not an exact power of two. That’s why binary prefixes (the 1024 conversion) don’t have any benefit whatsoever when it comes to hard drives. With memory it’s a bit different because, unlike storage devices, RAM size is always exactly a power of two.
Kilo = 1000
Byte = Byte
Kilobyte = 1000 bytes
Kibibyte = 1024 bytes
Byte = 8 Bits?
Yes, that is what a byte is.
This is why I only use nibbles. At least it’s not spelled funny. But, unfortunately, it sounds like dogfood… Kibinibbles.
You asked for feedback, so here is my feedback:
The article is okay. I read most of it, but not all of it, because it seemed overly worded for the sentiment. It could have been condensed quite a bit. I would argue the focus should be more on the fact that there should be a standard in technical documentation, OS’s, specification sheets, etc. That’s the part that impacts most people, and the reason they should care. But that kind of gets lost in all the text.
Your replies here come off as pretty condescending. You should anticipate most people not reading the article before commenting. Just pay them no attention, or reiterate what you already stated in the article. You shouldn’t just say “did you read the article” and then “it’s in this section of the article”. Just like how people comment on youtube before watching the video, people will comment on the topic without reading the article.
Maybe they didn’t realize it was an article, maybe they knew it was an article and chose not to read it, or maybe they read it and disagree with some of the things you said. It’s okay for people to disagree with something you said, even if you sincerely believe something you said isn’t a matter of opinion (even though it probably is). You can agree to disagree and move on with your life.
Thank you for taking the time to read it and your feedback.
Your replies here come off as pretty condescending.
That was definitely never my intention but a lot of people here said something similar. I should probably work on my English (I’m not a native speaker) to phrase things more carefully.
You shouldn’t just say “did you read the article” and then “it’s in this section of the article”
It never crossed my mind that this could be interpreted in a negative way. I tried to gauge whether someone read it and still disagreed, or didn’t read it and disagrees, because those are two different situations, at least for me. The hint about the sections was also meant as a pointer, because I know most people won’t read the entire thing but maybe have 5 minutes on their hands to read the relevant section.
Most native English speakers tend to take blunt statements/questions negatively due to the culture (especially true in north America).
I enjoyed reading the article but I would agree with the above commenter that it may be a bit lengthy. Generally speaking writing tends to be more engaging in this format if it’s a bit more concise, both as a whole and on a per sentence basis.
There was also a typo somewhere, I think “the” instead of another word; I read the article a few hours ago now so I can’t remember, sorry. I don’t think I would have guessed you were not a native English speaker from the article. Overall, I liked it, and congratulations on putting something out there!

Thank you for taking the time to read it ❤️. I’m currently out of office; I’ll try to find and fix the typo you mentioned once I’m back. Thanks for pointing it out.
I feel bad for you OP, I get this a lot and I’m totally gonna go there because I feel your pain and your article was fantastic! I read almost every word ;p
This phenomenon stems from an aversion, on the part of low-self-confidence people, to high-confidence people who make highly logical arguments; they basically make themselves feel unworthy/inadequate when justly critiqued/busted. It makes sense for them to feel that way too; I empathize. It’s hard to overcome the vapid rewarding and inflation in school. They should feel cheated and indignant at this whole situation.
I’ll be honest in front of the internet: people (the majority, mind you, say 70-80% of Americans, and I’m American) do not read every word of an article with full attention, because of ever-present and prevalent distractions, attention deficit, and lack of motivation. They skip sentences or even paragraphs of things they expect they already know, apply bias before the conclusion, do not suspend their own perspective to understand yours for even a brief time, and come from a skeptical position whether they agree with it or not!
In general, people also want to feel they have some valid perspective “truth” (as it’s all relative to them…) of their own to add and they want to be validated and acknowledged for it, as in school.
Guess what though, Corporations, Schools, Market Analysis, Novelists, PR people, Video Game Makers, Communications Managers and Small and Medium Business already know this! They even take a much more, ehh, progressive? approach about it, let’s say. That is, to really not let them speak/feedback, at all. Nearly all comment sections are gone from websites, comment boxes are gone from retail shops, customer service is a bot, technical writers make videos now to go over what they just wrote, Newspapers write for 4th graders, etc., etc.
Nothing you said is even remotely condescending and nothing you said was out of order. Don’t defend yourself in these situations because it’s just encouragement for them to do it again. Don’t take it personally yourself, that is just the state of things.
Improvise, Adapt, Re-engineer, Re-deploy, Overcome, repeat until done.
TL;DR?
“I am smart.”… “Most people have an attention span the length of a yo mama joke.”… “Ramble ramble yada yada yada.”
where do you get off
Imagine getting your weak ass argument destroyed by ChatGPT
lol, I didn’t know you could share chatGPT responses
I know it’s already been explained but here is a visualization of why.
1 2 4 8 16 32 64 128 256 512 1024
Did you read the blog post? If you don’t find the time you should at least read “(Un)lucky coincidence” to see why it’s not (and never was) a bright idea to call 1024 “a kilo”.
No we didn’t read your click bait and have no interest in doing so.
Zeta is also another word for a zoophile hahaha
Dude you’re pretty condescending for a new author on an old topic.
Yeah I read it and it’s very over worded.
1024 was the closest binary approximation of 1000 so that became the standard measurement. Then drive manufacturers decided to start using decimal for capacity because it was a great way to make numbers look better.
Then the IEC decided “enough of this confusion” and created binary naming standards (kibi gibi etc…) and enforced the standard decimal quantity values for standard names like kilo-.
It’s not ground breaking news and your constant arguing with people in the thread paints you as quite immature. Especially when plenty of us remember the whole story BECAUSE WE LIVED IT AS IT PROFESSIONALS.
We lacked a standard, a system was created. It was later changed to match global standard values.
You portray it with emotive language making decisions out to be stupid, or malicious. A decision was made that was perfectly sensible at the time. It was then improved. Some people have trouble with change.
Your writing and engagement styles scream of someone raised on clickbait news. Focus on facts, not emotion and sensationalism if you want to be taken seriously in tech writing.
Focus on emotion and bullshit if you want to work for BuzzFeed.
And if you just want an argument go use bloody twitter.
A kilobyte (kB) is 1000 bytes, that’s what the prefix kilo means. A kibibyte (KiB) is 1024 bytes (the “bi” in the prefix means base 2 or binary). People often confuse them, but they’re similar enough for smaller units, 10^3 ~ 2^10.
Oh and at first, kilobyte was used for both amounts, which is why kibibytes were introduced to fix the confusion, which perhaps was a bit late anyway.
True and that’s what the article is about. You should check out the interactive diagram in the “(Un)lucky coincidence” section.
you can’t ask for feedback, then attack everyone who doesn’t share your opinion with “did you read it?”, that’s not cool…
I still don’t get how “did you read it?” is attacking anyone? It’s true I asked for feedback but I’m a bit overwhelmed that I had to clarify that I’m interested in feedback about the post from people who actually read it.
“the tone makes the music” as the Germans would say. you’re asking for volunteer help and are rude to the ones replying
The mistake is thinking that a 1000 byte file takes up a 1000 bytes on any storage medium. The mistake is thinking that it even matters if a kB means 1000 or 1024 bytes. It only matters for some programmers, and to those 1024 is the number that matters.
Disregarding reality in favor of pedantics is the real mistake.
I dunno, it adds up to a few gigabytes of lost storage on a terabyte hard drive.
I suggest considering this from a linguistic perspective rather than a technical perspective.
For years (decades, even), KB, MB, GB, etc. were broadly used to mean 2^10, 2^20, 2^30, etc. Throughout the 80s and 90s, the only place you would likely see base-10 units was in marketing materials, such as those for storage media and modems. Mac OS exclusively used base-2 definitions well into the 21st century. Windows, as noted in the article, still does. Many Unix/POSIX tools do, as well, and this is unlikely to change.
I will spare you my full rant on the evils of linguistic prescriptivism. Suffice it to say that I am a born-again descriptivist, fully recovered from my past affliction.
From a descriptivist perspective, the only accurate way to define kilobyte, megabyte, etc. is to say that there are two common usages. This is what you will see if you look up the words in any decent dictionary. e.g.:
- https://www.dictionary.com/browse/kilobyte
- https://www.merriam-webster.com/dictionary/kilobyte
- https://en.wiktionary.org/wiki/kilobyte
I don’t recall ever seeing KiB/MiB/etc. in the 90s, although Wikipedia tells me they “were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard”.
While I wholeheartedly agree with the goal of eliminating ambiguity, I am frustrated with the half-measure of introducing unambiguous terms on one side (KiB, MiB, etc.) while failing to do the same on the other. The introduction of new terms has no bearing on the common usage of old terms. The correct thing to have done would have been to introduce two new unambiguous terms, with the goal of retiring KB/MB/etc. from common usage entirely. If we had KiB and KeB, there’d be no ambiguity. KB will always have ambiguity because that’s language, baby! regardless of any prescriptivist’s opinion on the matter.
Sadly, even that would do nothing to solve the use of common single-letter abbreviations. For example, Linux’s ls -l -h command will return sizes like 1K, 1M, 1G, referring to the base-2 definitions. Only if you specify the non-default --si flag will you receive base-10 values (again with just the first letter!). Many other standard tools have no such options and will exclusively use base-2 numbers.
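To illustrate the -h vs. --si difference, here’s a rough Python sketch of that kind of single-letter formatting (my own illustration, not coreutils’ actual code):

```python
# Sketch: format a byte count with single-letter prefixes,
# either in base 1024 (like plain `ls -l -h`) or base 1000 (like `--si`).
def human(size: float, base: int) -> str:
    for prefix in ["", "K", "M", "G", "T"]:
        if size < base:
            return f"{size:.1f}{prefix}"
        size /= base
    return f"{size:.1f}P"

n = 1_500_000_000
print(human(n, 1024))  # 1.4G
print(human(n, 1000))  # 1.5G
```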
Here’s the summary for the wikipedia article you mentioned in your comment:

In the study of language, description or descriptive linguistics is the work of objectively analyzing and describing how language is actually used (or how it was used in the past) by a speech community. All academic research in linguistics is descriptive; like all other scientific disciplines, it seeks to describe reality, without the bias of preconceived ideas about how it ought to be. Modern descriptive linguistics is based on a structural approach to language, as exemplified in the work of Leonard Bloomfield and others. This type of linguistics utilizes different methods in order to describe a language, such as basic data collection and different types of elicitation methods.
Because SI prefixes are always powers of the base. Base 10 is the most common, but that’s more human psychology than math.
SI prefixes are literally just base ten and not really about human psychology.
Please count your fingers and reevaluate your response.
Bold of you to assume they had 10 fingers
I think they mean it’s easier to refer to powers of 1000 with the SI units, rather than of 1024 as with Kibi and the lot. Especially higher up in the prefixes, because it starts to diverge more and more from the expected value.
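That growing divergence is easy to quantify; here’s a small Python sketch:

```python
# Sketch: how far each binary prefix drifts from its decimal counterpart.
names = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi", "peta/pebi"]
for i, name in enumerate(names, start=1):
    drift = (1024**i / 1000**i - 1) * 100
    print(f"{name}: {drift:.1f}%")

# kilo/kibi: 2.4%, mega/mebi: 4.9%, giga/gibi: 7.4%,
# tera/tebi: 10.0%, peta/pebi: 12.6%
```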