Hi Bharath,
The 580 has a 10-bit ADC. You multiply the value that you get from the ADC by the reference voltage (1.2 V, or 3.6 V with the attenuator) divided by 1023, i.e. (Reference Voltage / 1023) * ADC value.
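As a minimal sketch of that formula in C (the helper name and the millivolt units are my own choice for illustration, not something from the SDK):

```c
#include <stdint.h>

/* Convert a raw 10-bit ADC reading to millivolts.
 * ref_mv is 1200 (1.2 V reference) or 3600 (3.6 V with the attenuator enabled). */
static uint32_t adc_raw_to_mv(uint16_t raw, uint32_t ref_mv)
{
    return ((uint32_t)raw * ref_mv) / 1023u;
}
```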
Thanks MT_dialog
Thank you, that was helpful.
In the battery level example code, why is the ADC sample read twice and the two readings added?
Hi Bharath,
The purpose of getting two sample values is related to ADC calibration. Ideally we should calculate the negative and positive calibration settings and write them to the calibration HW registers, but at the moment we take one sample for each calibration setting and then average the two samples so that the result is more accurate.
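A rough sketch of that two-sample approach (adc_sample_with_sign() is a hypothetical stand-in for however your code takes one conversion with a given sign setting, not an SDK call):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in: take one conversion with the given sign setting. */
extern uint16_t adc_sample_with_sign(bool negative);

/* Take one sample per sign setting and average them, as described above. */
static uint16_t adc_sample_averaged(void)
{
    uint16_t pos = adc_sample_with_sign(false);
    uint16_t neg = adc_sample_with_sign(true);
    return (uint16_t)((pos + neg) / 2u);
}
```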
Thanks MT_dialog
But in the code it is not averaged (the two samples are only added, not divided by 2). What does this mean?
Hi Bharath,
The code in the battery application doesn't literally average the accumulated samples. If you check the battery life estimation functions, you will see that they estimate the battery life over a 2048-sample range; the estimation algorithm works directly with the sum of the two samples.
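In other words, the sum of two 10-bit samples spans 0..2046 and is treated as a 2048-count range, so no division by two is needed. A hedged sketch of that idea, with placeholder thresholds rather than the values used in the Dialog battery example:

```c
#include <stdint.h>

/* Map the summed two-sample reading (0..2046, i.e. a 2048-count range)
 * onto a 0..100 % battery level. The empty/full thresholds below are
 * placeholders for illustration, not the ones in the SDK battery code. */
static uint8_t batt_level_from_sum(uint16_t sample_sum)
{
    const uint16_t empty_sum = 1700;  /* placeholder: sum at 0 %   */
    const uint16_t full_sum  = 2000;  /* placeholder: sum at 100 % */

    if (sample_sum <= empty_sum)
        return 0;
    if (sample_sum >= full_sum)
        return 100;
    return (uint8_t)(((uint32_t)(sample_sum - empty_sum) * 100u) /
                     (full_sum - empty_sum));
}
```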
Thanks MT_dialog
So if I am using the ADC to read an analog signal, should I follow the same method? What is the reference voltage if I follow the 2048 method?
How does it affect the accuracy if I use a single read?
Is there any gain at the ADC?
Thanks
Bharath
Hi Bharath,
The implementation applies to the battery measurement and the algorithm is suitable for that case. The reference voltage of the ADC is 1.2 V, or 3.6 V when the attenuator is used. My personal opinion is not to use the same method as the battery example, since the battery channel is internal; you should just use single mode for your measurements. Regarding the gain, I suppose you mean the gain error: yes, it has a gain error that slightly reduces the effective input scale, by up to 50 mV.
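For an external signal, a single conversion plus the conversion formula from earlier is usually enough. A sketch under the same assumptions (adc_single_sample() is a hypothetical placeholder for one single-ended conversion on your external channel, and the 3.6 V value assumes the attenuator is enabled):

```c
#include <stdint.h>

/* Hypothetical stand-in: one single-ended conversion on an external channel. */
extern uint16_t adc_single_sample(void);

/* One read, converted to millivolts with the 3.6 V (attenuated) reference.
 * Note the gain error mentioned above can reduce the effective input
 * range by up to roughly 50 mV. */
static uint32_t read_input_mv(void)
{
    uint16_t raw = adc_single_sample();
    return ((uint32_t)raw * 3600u) / 1023u;
}
```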
Thanks MT_dialog