I think the article is wrong in its core premise. While electrons get added to or removed from the floating gate, the total number of electrons in the SSD chip stays the same. Gates are capacitors: in order to add electrons to one capacitor plate, you have to remove an equal number of electrons from the other plate, i.e. from the transistor channel. The net charge of an SSD chip is always zero. Otherwise it would just go bang. <s>2.43×10^-15</s> [my bad 1] 2.67×10^15 electrons is about 430µC - that's a lot of charge to separate macroscopically.
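Back-of-the-envelope, in Python (the 2.43×10^-15 kg figure is the article's; constants rounded):

```python
# How many electrons correspond to the article's 2.43e-15 kg figure,
# and how much charge would that be if it were actually separated?
M_E = 9.109e-31   # electron rest mass, kg
Q_E = 1.602e-19   # elementary charge, C

delta_m = 2.43e-15            # kg, the article's mass figure
n_electrons = delta_m / M_E   # ~2.67e15 electrons
charge = n_electrons * Q_E    # ~4.3e-4 C, i.e. roughly 430 microcoulombs

print(f"{n_electrons:.3g} electrons, {charge * 1e6:.0f} uC")
```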
Therefore the mass (weight is a different thing, though it is proportional to mass at a given constant gravitational potential) of the data on an SSD isn't fundamentally different from that on an HDD - both are caused by a change of internal energy without any change in the number of fermions. I'd expect data on an SSD to have a larger mass change, because a charged capacitor always stores more energy than a discharged one, while the energy of magnetic domains is less directional and depends mostly on the states of neighboring domains - but I'm not sure about this part.
[1] Thanks stackghost.
> So, assuming the source material is correct and electrons indeed have mass, SSDs do get heavier with more data.
That is definitely wrong! There's no way the SSD ends up with more electrons. The only way it could do that is by being net charged.
Richard Feynman, The Feynman Lectures: "If you were standing at arm's length from someone and each of you had one percent more electrons than protons, the repelling force would be incredible. How great? Enough to lift the Empire State Building? No! To lift Mount Everest? No! The repulsion would be enough to lift a "weight" equal to that of the entire earth!"
From: https://tycho.parkland.edu/cc/parkland/phy142/summer/lecture...
Not sure I'd survive that experiment.
See, now, if this was Reddit...this is the opportunity for a yo momma joke. But here we are on HN, so I'll just point out that this is the opportunity for a yo momma joke.
Yo mamma is so fat she broke the Coulomb barrier?
Exactly. On top of that, most managed flash (which is equivalent to SSD controllers) will pass all writes through a modified cyclic XOR pad in order to keep the /bit/ entropy high. I don't think the article holds up across multiple abstraction layers.
Which is the same reason storing data to an HDD doesn't add weight. You can pack the data tighter if you're writing essentially balanced 1s and 0s. Thus you can fit more bytes into a given area by encoding them into patterns with even distributions, even though that means writing more bits.
But SSD erasing must write a constant (either all ones or all zeros). So an erased, ready-to-write SSD block will have a consistently different energy from one written with a random scrambled pattern. The same goes for SMR HDDs - but not for CMR.
>2.43×10^-15 electrons
I believe TFA reads 2.43×10^-15 kg, not electrons. Unless SSDs are creating new and exciting physics, one can't have less than one electron, as it's an elementary particle.
Well you could have a virtual particle whose mass could be time-averaged.
Neutrinos weigh far less than electrons (but while NAND flash involves super weird physics, it's not that weird)
They do weigh far less, but a quantity of "10^-15 electrons" is still impossible.
10^–15 is not a negative number, just a small one. https://www.wolframalpha.com/input?i=10%5E-15+
And it is less than one?
I think my favorite part of that comment is "documenting" that 10^(-15) is not negative by appealing to Wolfram Alpha.
your user name is found at the 4,922,096,564th digit of Pi
Yes, you're correct. Now ask yourself if "one quadrillionth of an electron" is a quantity that's possible to have.
Good thing he didn't say that
Another bit I’m surprised seems to have gotten completely glossed over: there is a deep relationship between _entropy_ and mass which puts bounds on the amount of information you can place in a given volume.
TLDR: a given region of space can’t have more entropy than a black hole of the same volume. Rearranging terms, you find that N bits of information (for large N) has an equivalent black hole size, which in turn has a mass…
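Sketching that bound in Python, using the Bekenstein-Hawking relation (one bit per 4·ln 2 Planck areas of horizon); the terabyte-scale input is my illustrative choice:

```python
import math

# Bekenstein-Hawking: a black hole's entropy is A / (4 l_p^2) nats,
# so N bits need at least a horizon area of A = 4 ln(2) l_p^2 N.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C   = 2.998e8      # speed of light, m/s
L_P = 1.616e-35    # Planck length, m

def min_black_hole_mass(n_bits: float) -> float:
    """Mass (kg) of the smallest black hole that can hold n_bits."""
    area = 4 * math.log(2) * L_P**2 * n_bits   # horizon area, m^2
    radius = math.sqrt(area / (4 * math.pi))   # Schwarzschild radius, m
    return radius * C**2 / (2 * G)             # M = r c^2 / (2 G)

# ~1 TB (8e12 bits): the bound works out to roughly 15 grams -- far
# above any real drive's data mass, so the limit is nowhere near binding.
print(min_black_hole_mass(8e12))
```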
> energy of magnetic domains is less directional and depends mostly on the state of neighbor domains
Yes, but it's the same thing. The flux changes on the drive define the bits. It's probably true that a drive storing all 1's or all 0's would be quantitatively (but surely immeasurably) lighter. But in practice a drive storing properly compressed high-entropy data is going to see a flux change every other bit on average. And all of those are regions of high magnetic field with calculable energy density. Same deal as charge in a capacitor, which also stores energy in the field.
TFA started out seeming well enough written but definitely turned LLM-padded in the middle. And yeah, I think you're right about the actual science.
Reminds me of an old April Fools' prank in German c't magazine. They offered a defragmentation-like tool for HDDs that claimed to distribute 0s and 1s more evenly on the drive to make it run more smoothly and extend its lifespan.
Amusingly, that's unnecessary, but possibly not for the reason most people think. It's not because the hard drive hardware is oblivious to runs of 0s and 1s exactly... it's because it's actually so sensitive that it already is recording the data in an encoding that doesn't allow for long runs of 0s and 1s. You can store a big file full of zeros on your disk and the physical representation will be about 50/50 ones and zeros on the actual storage substrate already. Nothing you do at the "data" layer can even create large runs of 0s or 1s on the physical layer in the first place. See https://www.datarecoveryunion.com/data-encoding-schemes/
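A toy Python model of one early such code, MFM (modern drives use far more elaborate RLL schemes): each data bit is preceded by a clock bit that is 1 only between two zero data bits, so even an all-zeros file produces a physical pattern with no long runs.

```python
def mfm_encode(bits):
    """Encode data bits as MFM channel bits (clock bit + data bit).

    Toy model: the clock bit is 1 only between two consecutive 0
    data bits, which limits runs of channel zeros to at most three.
    """
    out, prev = [], 0
    for b in bits:
        clock = 1 if (prev == 0 and b == 0) else 0
        out += [clock, b]
        prev = b
    return out

# An all-zeros "file" still comes out as alternating channel bits:
print(mfm_encode([0, 0, 0, 0]))  # [1, 0, 1, 0, 1, 0, 1, 0]
print(mfm_encode([1, 1, 1, 1]))  # [0, 1, 0, 1, 0, 1, 0, 1]
```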
Well it depends when it was claimed.
I imagine MFM drives from 1985 might be a bit different from drives that are billions of times more data dense today. Back then, the drive didn't even control track width, the controller card did. And it was exposed to the OS.
I remember turning my "20MB", yes MB drive into a 30MB drive by messing with the track width. Of course, this was the time when people had Commodore 64 300baud modems, and would overclock them and get 450baud out of them.
In my computer club, we wrote a little piece of software to see which of us could get the highest bandwidth on a modem, one was even capable of just over 500baud!
After ranking, we all agreed to "trade down", so the guy with the fastest modem swapped his with the owner of the local Punter BBS. Everyone else traded so we still had the same ranking. That way, the BBS would always be able to support everyone at max speed, and everyone would still be "lucky" in terms of "next fastest modem".
I can't imagine that happening today.
You could imagine what MFM drives were like, or you could read about it, in the link I gave.
I did read, but so ingrained is calling the "controller" MFM, that I literally thought it was referencing the standard, which I think was ST-506 (this was in 1983, so the timing seems to be right?).
E.g., I literally thought of the controller and the encoding as different things, both separately called MFM. Ah well, it only took 40 years to discover otherwise.
Thanks for the link.
this principle applies to a lot of things. signaling, for example: optical links. oldschool optical links (OC48 timeframe) did not feature scramblers, so a malicious packet could on occasion cause them to de-train and go out of sync, since a long run of zeros looks like an extended loss of light.
long since fixed, but it was a common problem.
High-density NAND flash also needs "whitening", i.e. scrambling the data to be stored so that the number of 1s and 0s is even and randomly distributed, to avoid wearing some cells (the ones that are storing 0s) more than others, as well as reduce pattern-dependent disturb errors.
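A toy model of that whitening step, assuming a simple additive 16-bit LFSR keystream (illustrative only: real controllers use wider polynomials and per-page seeds). Since it's just an XOR with a fixed keystream, applying it twice round-trips:

```python
def whiten(data: bytes, seed: int = 0xACE1) -> bytes:
    """XOR data with a 16-bit Fibonacci LFSR keystream.

    Toy scrambler: real NAND controllers use wider polynomials and a
    different seed per page. The same routine also descrambles.
    """
    state, out = seed, bytearray()
    for byte in data:
        ks = 0
        for _ in range(8):
            # taps 16,14,13,11 -> x^16 + x^14 + x^13 + x^11 + 1
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            ks = (ks << 1) | (state & 1)
        out.append(byte ^ ks)
    return bytes(out)

page = bytes(64)                  # a "page" of all-zero data
scrambled = whiten(page)          # roughly balanced 0s and 1s on the cells
assert whiten(scrambled) == page  # round-trips
```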
That said, digital storage media has been somewhat pattern-sensitive for a century or more: https://en.wikipedia.org/wiki/Lace_card
The self-synchronizing scrambler of 10GBase-SR and its relatives is a beautiful piece of engineering.
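A bit-level Python sketch of why it self-synchronizes, using the x^58 + x^39 + 1 scrambler polynomial from the 10GBASE-R PCS: the descrambler's shift register is filled from the received bits themselves, so it locks on after 58 bits regardless of its initial state.

```python
def scramble(bits, state=0):
    """Multiplicative scrambler, x^58 + x^39 + 1 (10GBASE-R style)."""
    out = []
    for b in bits:
        s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)
        state = ((state << 1) | s) & ((1 << 58) - 1)  # feed back output
        out.append(s)
    return out

def descramble(bits, state=0):
    """Self-synchronizing: the register is fed the received bits."""
    out = []
    for s in bits:
        b = s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)
        state = ((state << 1) | s) & ((1 << 58) - 1)  # feed in input
        out.append(b)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 16
assert descramble(scramble(data)) == data
# Self-sync: even with a mismatched initial state, the descrambler
# recovers after the first 58 bits.
garbled = descramble(scramble(data), state=(1 << 58) - 1)
assert garbled[58:] == data[58:]
```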
Interestingly, I heard that entrenched telco people were pushing for a much more complicated, SONET-ish approach. But classic Ethernet simplicity carried the day, and it's really nice...
I think my network card does that.
Elektor magazine used to prank too; my favourite one was their "solar-powered pocket torch." It wasn't rechargeable.
Okay I think I can clarify this: Electrons trapped in the gate (when storing a 0) come from the substrate. The substrate is connected to ground, and the “lost” electrons are replenished. So yes, net chip weight grows when 0s are written.
However, weight relative to what? All 0s on a chip will be heavier (the heaviest). All 1s would be the lightest. 50/50 1s and 0s would be the middle, which is where I’d expect generic “data” to fall.
The insulating oxide layers prevent the electrons from leaking out quickly, allowing data to persist for 10+ years under normal conditions.
In SLC flash, 10+ years is normal. Modern QLC is far more volatile: https://news.ycombinator.com/item?id=43739028
The article seems to imply that only the ones are real data. In fact every zero is just as important as a one.
Data has negative weight on punched cards or tape.
You need one of these!
https://www.eejournal.com/fresh_bytes/how-do-you-weigh-a-pro...
Light bulbs in video games use real electricity.
But what about the magnetic properties of SSDs? Any additive alignment for data?
Or the opposite, magnetic aligned fields for all 1’s or all 0’s?
Negligible now, but critically important effects to understand before we build a planet sized drive and wipe it!
Also, a planet sized drive will need to explicitly maintain large reserves of electrons. In theory, enough for an all ones (or zeros) state.
But that could be handled by tiling areas of ones=high and zeros=high, with tile charge flipping to maintain a balance in electron needs, locally and globally.
An encrypted drive is likely to have (close to) equal numbers of 0s and 1s whether full or empty, so any of these arguments are moot.
If the drive isn't encrypted, is it possible that controllers use some kind of encoding to balance out the number of bits, so that there's not a long run of 0s or 1s?
Yes, this is necessary for high density NAND flash and is referred to as "whitening" or "scrambling". Not needed at all for SLC or older MLC.
Was expecting Boltzmann and entropy to be involved at some point :(
Yeah, I was motivated to go Wiki diving, where I just learned about the Shannon (unit). https://en.wikipedia.org/wiki/Shannon_(unit)
Time to replace "I'm zero surprised" with "That's a zero Shannon event"
I'm Mega-Shannoned! Mega-Shannoned, I tell you, to learn that gambling is going on here.
Could you spin an SSD on a string really fast and load data when it’s on one side and delete it on the other and create forward motion?
Massless propulsion??
The rate at which molecules of plastic sublimate off the surface of the enclosure probably accounts for a much larger amount of mass. The rate rises roughly exponentially with temperature, doubling about every 10 degrees C. So if you get a drive and fill it with data (which warms it up significantly), the lost casing material will dominate the mass balance.
The rate at which particles of dust settle on the surface of the enclosure is even higher.
I guess it's because the 1s weigh more than 0s? Which is counterintuitive because the 0s are chubbier.
E = mc², so m = E/c².
c is a really big number. c² is a really, really big number. E is small.
So m is really, really small.
Given the existence of Szilard's engine showing that information can be converted to energy, can we not conclude that any system storing information has potential energy and therefore mass?
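In principle yes, and the numbers are easy to run. A sketch assuming the Landauer bound of kT·ln 2 per bit (the flip side of the Szilard engine); the temperature and drive size are illustrative assumptions:

```python
import math

# Landauer bound: erasing (or extracting work from) one bit involves
# at least k*T*ln(2) of energy; mass equivalent via m = E / c^2.
K_B = 1.381e-23   # Boltzmann constant, J/K
C   = 2.998e8     # speed of light, m/s
T   = 300.0       # room temperature, K

e_per_bit = K_B * T * math.log(2)   # ~2.9e-21 J
m_per_bit = e_per_bit / C**2        # ~3.2e-38 kg

# Even a full 2 TB drive's worth of bits is ~5e-25 kg at this bound.
print(m_per_bit * 2e12 * 8)
```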
Does information weigh anything?
"Data has weight, but only on SSDs" - Not just SSDs! Unless you always hang the chad, surely writing data onto punchcards reduces the weight of that 'storage medium'!
So it has negative weight in this case.
Negative mass = negative energy, so we should be able to make an Alcubierre Drive out of punched cards.
If some PMs today evaluate performance by the number of lines of code, I wonder if the punch card equivalent was weighing the punched-out holes that were removed by each developer.
You can do binary by etching glass, and the more data you have, the less it weighs. Negative space is quite useful.
Lights in video games are real, but only if you're using an OLED or CRT.
Classic Cunningham’s Law… post the wrong answer and you’ll get the correct one. Then the comments can be used by LLM to output the correct answer!
Data has negative weight on optical media. The data gets burned off of it!
All data changes mass of the medium:
Every data storage media requires some work be done to it.
E=mc^2
All data storage media has mass.
QED
Work being done to it will show up as heat, which is indeed subject to E=mc^2. But when it cools there's no residual mass.
The data doesn't actually have weight because they aren't going to store a 1 or a 0, but rather do something like store 01 vs 10.
Just because work is done doesn't mean the energy is stored. It could just dissipate as heat.
As a counter-example, consider etching data in some form into another material, say stone or metal. You do work to remove the material you etch away, but you are removing material, so the final mass is actually less than you started.
That said, I believe most digital storage uses a high energy and low energy state to store 0 and 1, and in that case the high energy state will have (very, very slightly) more mass than the low energy state. But even then, having all bits in the high energy state would be the "heaviest", but would effectively have no data.
"All data changes mass of the medium"
The first line
This also applies, on a larger scale, when one adds data to a medium like a sheet of paper: the graphite or ink adds to the mass of the storage medium. But does this constitute data? The maximum mass would be achieved by covering the entire sheet with graphite/ink, which, it could be argued, is not data (unless you consider it to be a binary cell in a larger byte of data). I don't know the physics of thermal paper, but I suspect it might be the opposite. My point? This is not evidence that data has mass; it is evidence that transcribing data onto a storage medium may change the mass of that medium, and that the change may be positive or negative.
Perhaps I should have this carved on my tomb stone...
All data storage and retrieval uses energy, which has some mass equivalent, but radiates as heat, magnetism, light, and electricity, which must produce molecular changes in the data medium and surrounding structures. Figuring out the net energy/weight balance for each possible data use is going to be way out there on the thinnest limbs of conjecture. Increasing the degas rate or polymerising the surface of x, x¹, x²...
Another fun calculation is that due to special relativity, a hard drive that is spinning gains a certain amount of mass due to the rotational kinetic energy and E=mc^2.
Assuming the platter is 100g, 42mm, spinning at 7200RPM, there is about 25J of rotational kinetic energy, whose mass equivalent is 2.8x10^-13g (0.28 picograms).
Assuming 200 electrons per NAND floating gate with 3 bits/cell TLC on a 2TB SSD, there would be about 1.1x10^15 electrons, weighing roughly a picogram.
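Running those assumptions through Python (100 g platter modeled as a uniform disc of 42 mm radius at 7200 RPM; 200 electrons per gate, 3 bits/cell, 2 TB):

```python
import math

# Rotational kinetic energy of the platter and its mass equivalent.
m, r = 0.100, 0.042                 # platter mass (kg) and radius (m)
omega = 7200 * 2 * math.pi / 60     # 7200 RPM in rad/s
inertia = 0.5 * m * r**2            # uniform disc
ke = 0.5 * inertia * omega**2       # ~25 J
ke_mass = ke / (2.998e8)**2         # ~2.8e-16 kg = 0.28 picograms

# Electrons in the floating gates of a fully programmed 2 TB TLC drive.
cells = 2e12 * 8 / 3                # bits / (bits per cell)
electrons = cells * 200             # ~1.1e15
e_mass = electrons * 9.109e-31      # ~9.7e-16 kg, about a picogram

print(ke, ke_mass, electrons, e_mass)
```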
More appropriately data has a temperature.
Data does have real weight. In one of my early assignments my firmware was too large to fit on one EPROM. Naively I thought the hardware team could just add another EPROM to the board. Turns out while they had left place for another device, it would have exceeded the payload budget by a few grams. Had to go back and reduce the code by a few hundred bytes.
Now do fiber and tell me the relativistic mass of my router so my ISP can charge me an overweight fee.
Interesting - I wonder if one can translate this into the amount of data on the drive? Maybe it doesn't matter unless one cleared the drive using dd(1).
Also, would trimming cause a different value even though the data size remains the same? I would think so, assuming I understand trim.