getnstimeofday() suffers from the y2038 overflow on 32-bit
architectures and requires a manual conversion into the nanosecond
format that we want here.
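
For illustration, this is the pattern being removed, a minimal
kernel-style sketch of the lines deleted in the diff below:

/* On 32-bit architectures, ts.tv_sec is a 32-bit signed time_t that
 * overflows on 2038-01-19, so the computed nanosecond value goes
 * wrong even though it lands in a 64-bit variable, and the caller
 * has to do the scaling to nanoseconds by hand. */
struct timespec ts;
u64 ns;

getnstimeofday(&ts);
ns = ts.tv_sec * 1000000000ULL + ts.tv_nsec;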
This changes ssp_parse_dataframe() to use ktime_get_real_ns() directly,
which does not have that problem.
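
The replacement is a single call returning a u64 nanosecond count
derived from the kernel's 64-bit timekeeping, so there is no 32-bit
seconds value to overflow and no manual scaling step:

/* Same CLOCK_REALTIME instant, already in nanoseconds. */
data->timestamp = ktime_get_real_ns();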
An open question is what time base should be used here. Normally
timestamps should be taken with ktime_get_ns() or ktime_get_boot_ns(),
which read monotonic time rather than "real" time; the latter suffers
from time jumps due to settimeofday() calls or leap seconds.
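
For comparison, a monotonic variant would look like the sketch below.
This is hypothetical and not part of this patch; it only illustrates
the alternatives named above:

/* CLOCK_MONOTONIC never jumps on settimeofday() or leap seconds;
 * ktime_get_boot_ns() would additionally keep counting across
 * suspend. */
if (data->time_syncing)
        data->timestamp = ktime_get_ns();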
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
 {
         int idx, sd;
-        struct timespec ts;
         struct ssp_sensor_data *spd;
         struct iio_dev **indio_devs = data->sensor_devs;

-        getnstimeofday(&ts);
-
         for (idx = 0; idx < len;) {
                 switch (dataframe[idx++]) {
                 case SSP_MSG2AP_INST_BYPASS_DATA:
...
         }

         if (data->time_syncing)
-                data->timestamp = ts.tv_sec * 1000000000ULL + ts.tv_nsec;
+                data->timestamp = ktime_get_real_ns();

         return 0;
 }