Hello
Today I wrote an algorithm that checks the correctness of data blocks in a dump read from the K9GAG08U0E memory.
For the 65 memory blocks read (as a reminder: each block consists of 128 pages, and each page is divided into 8 fragments), the statistics are as follows:
STAT: OK = 42220 COR = 32 FF = 23704 00 = 0 ERR = 0 ERR24 = 604
where:
OK - a valid fragment was found; the calculated BCH matches the one stored in memory
COR - the checksum did not match, but the erroneous bits were corrected/reconstructed
FF - all bytes contain 0xFF
00 - all bytes contain 0x00
ERR - the attempt to reconstruct the erroneous bits failed
ERR24 - the number of erroneous bits is greater than 24
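As a sanity check, the six counters sum to 42220 + 32 + 23704 + 0 + 0 + 604 = 66560, which is exactly 65 blocks x 128 pages x 8 fragments, so every fragment is accounted for.
For illustration only, here is a minimal Python sketch of how such a per-fragment classification could look. The simplified parity-popcount comparison, the corrected_ok flag and the 24-bit threshold are my assumptions for the sketch, not the actual ECC layout or decoder used for the K9GAG08U0E:

from collections import Counter

BCH_T = 24  # assumed correction capability, matching the ERR24 threshold above

def classify_fragment(data, stored_parity, calc_parity, corrected_ok):
    # data          - payload bytes of one fragment
    # stored_parity - BCH parity bytes read from the dump
    # calc_parity   - BCH parity recomputed over the payload
    # corrected_ok  - True if a correction attempt succeeded (placeholder; a real
    #                 implementation would run a BCH syndrome decode here)
    if all(b == 0xFF for b in data):
        return "FF"        # erased fragment
    if all(b == 0x00 for b in data):
        return "00"        # zeroed fragment
    diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(stored_parity, calc_parity))
    if diff_bits == 0:
        return "OK"        # calculated BCH matches the stored one
    if diff_bits > BCH_T:
        return "ERR24"     # more bad bits than the code can handle
    return "COR" if corrected_ok else "ERR"

stats = Counter()
# for every fragment in the dump: stats[classify_fragment(...)] += 1, then e.g.:
# print("STAT:", " ".join(f"{k} = {stats[k]}" for k in ("OK", "COR", "FF", "00", "ERR", "ERR24")))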
Added after 10 [hours] 55 [minutes]: Hello
Here is today's update of the statistics:
STAT: OK = 123574 COR = 1038 FF = 506936 00 = 2048 ERR = 0 ERR24 = 35076
00 - we have 2048 such fragments, i.e. exactly two full blocks damaged (2 blocks x 128 pages x 8 fragments = 2048).
Please note the large number of ERR24 results; here is an example:
BCH: T:1 L:0 B:653 P:30 Offset:0 Count bit:166 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:1 Count bit:162 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:2 Count bit:158 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:3 Count bit:175 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:4 Count bit:169 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:5 Count bit:159 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:6 Count bit:176 ERR BIT>24
BCH: T:1 L:0 B:653 P:30 Offset:7 Count bit:160 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:0 Count bit:182 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:1 Count bit:150 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:2 Count bit:174 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:3 Count bit:165 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:4 Count bit:162 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:5 Count bit:172 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:6 Count bit:175 ERR BIT>24
BCH: T:1 L:0 B:653 P:32 Offset:7 Count bit:159 ERR BIT>24
BCH: T:1 L:0 B:653 P:33 Offset:7 Count bit:1 OK
BCH: T:1 L:0 B:653 P:34 Offset:0 Count bit:163 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:1 Count bit:151 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:2 Count bit:182 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:3 Count bit:166 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:4 Count bit:158 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:5 Count bit:166 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:6 Count bit:172 ERR BIT>24
BCH: T:1 L:0 B:653 P:34 Offset:7 Count bit:169 ERR BIT>24
BCH: T:1 L:0 B:653 P:35 Offset:5 Count bit:177 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:0 Count bit:153 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:1 Count bit:167 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:2 Count bit:188 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:3 Count bit:179 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:4 Count bit:172 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:5 Count bit:178 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:6 Count bit:165 ERR BIT>24
BCH: T:1 L:0 B:653 P:36 Offset:7 Count bit:172 ERR BIT>24
Such strongly differing bits indicate that the data in this area has been abandoned and rewritten elsewhere, which is how wear leveling behaves.
I have not yet found the location where information is stored about which fragments are currently in use, i.e. which ones actually matter.
Added after 9 [hours] 15 [minutes]: Today I generated statistics for a K9GAG08U0E dump marked as OK, named GQ5X_K9GAG08U0E_D5500_OK.bin.
Due to a minor error, only 2064 blocks were processed instead of 2074, but the following conclusions can be drawn:
STAT Block: 2064 OK = 768953 COR = 13780 FF = 1104967 00 = 24576 ERR = 0 ERR24 = 201260
And below is a breakdown of the fragments by the number of differing bits (0 means that no bits differed):
STAT: 0 = 768953
STAT: 1 = 13125
STAT: 2 = 632
STAT: 3 = 22
STAT: 4 = 1
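These bins tally the scanned fragments by how many bits had to be corrected: the 0 bin equals the OK count, and the remaining bins sum to the COR count (13125 + 632 + 22 + 1 = 13780). A small sketch of how such a tally could be produced, assuming a list of per-fragment corrected-bit counts collected during the scan, might look like this:

from collections import Counter

def print_correction_histogram(corrected_bits_per_fragment):
    # one entry per scanned fragment; 0 means the fragment needed no correction
    histogram = Counter(corrected_bits_per_fragment)
    for n_bits in sorted(histogram):
        print(f"STAT: {n_bits} = {histogram[n_bits]}")

# toy example with made-up numbers (not the real dump):
print_correction_histogram([0, 0, 1, 0, 2, 1, 0])
# prints:
# STAT: 0 = 4
# STAT: 1 = 2
# STAT: 2 = 1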
The conclusion is that this nominally good memory had 13,780 fragments with bad bits, all of which could be corrected; at worst, 4 bits were wrong in a single fragment.
Stranger still, there are fragments whose parity bits simply do not match, and there are a lot of them: about 9.5% of the entire capacity (201,260 out of 2064 x 128 x 8 = 2,113,536 fragments). Statistically, the number of mismatched parity bits in such a fragment falls roughly in the range of 150-180.
At the moment I am not able to explain why this happens or what the logic behind it is. Even if I treat this data as irrelevant, i.e. written earlier but already out of date because a newer version was saved in a different memory area (as wear leveling does), I still cannot understand why the parity bits do not match. All the more so because this memory has an NOP (Number of Programming) parameter of 1.
Attached is the complete file from the dump analysis process.