Brisbane-based digital health company ResApp has been talking up its Smartcough-C clinical study for a number of months, but has now suffered a setback: trial data tainted by “procedural anomalies”.

Conducted at Massachusetts General Hospital, Cleveland Clinic, and Texas Children’s Hospital, the multi-site, double-blind trial of the company’s smartphone-based respiratory analysis software was intended to support ResApp’s FDA submission for a number of respiratory conditions.

However, the top-line results are in, and things did not go according to plan. Citing unexpected issues with the clinical data, the company says the study failed to meet most of its primary endpoints for both positive and negative agreement.

“These are not the results that we expected given our experience in Australia,” Tony Keating, CEO and managing director of ResApp Health, said in a statement.

“It is obvious that a large number of tests have been affected by procedural anomalies and we now need to go through each case one by one to fully understand the results,” Keating said.

“I am confident that we can add another layer of detail to the next set of study protocols to deliver robust results and that we have adequate funding to complete a second US paediatric pivotal clinical study this US winter as well as continue and complete our adult program, including our US adult pivotal study which is also set to begin this US winter.”

ResApp’s technology uses algorithms to identify respiratory conditions based on recordings of cough sounds. Studies in the company’s native Australia have returned promising results.

In April 2016, ResApp achieved an accuracy of 89 per cent in a clinical study of 524 paediatric patients conducted at Perth’s Joondalup Health Campus and Princess Margaret Hospital. In a smaller trial of 243 adult patients, also at Joondalup, accuracy ranged between 91 and 100 per cent.

But the Smartcough-C results appear to have been tainted by two data collection anomalies. First, contrary to instructions, many patients were treated before the recording was made, leading to a high level of inaccuracy.

Second, a number of the recordings had too much interference to be used in the study. When those recordings were excluded, even the few conditions that achieved high accuracy, such as bronchitis, ended up with such small sample sizes that they might still not be useful for an FDA submission.

The positive and negative agreement rates ranged from just 36 per cent (positive agreement for asthma) up to 95 per cent (negative agreement for bronchiolitis).
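For readers unfamiliar with the metric, positive and negative percent agreement compare a test’s calls against a reference diagnosis. A minimal sketch, using illustrative counts that are not taken from the Smartcough-C study:

```python
def agreement_rates(tp, fp, fn, tn):
    """Positive percent agreement (PPA) and negative percent agreement (NPA)
    between a test and a reference diagnosis, from confusion-matrix counts."""
    ppa = tp / (tp + fn)  # of reference-positive cases, fraction the test also flagged
    npa = tn / (tn + fp)  # of reference-negative cases, fraction the test also cleared
    return ppa, npa

# Illustrative counts only -- not figures from the study.
ppa, npa = agreement_rates(tp=36, fp=10, fn=64, tn=90)
print(f"PPA = {ppa:.0%}, NPA = {npa:.0%}")  # PPA = 36%, NPA = 90%
```

Excluding noisy recordings shrinks every one of these counts, which is why even high-agreement conditions can end up statistically too thin for a regulatory submission.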

This is a major setback for the company, which hopes to learn from these mistakes in future studies.

“The Smartcough-C data provides a valuable insight into the recruited US population and into US diagnosis practices,” Dr Udantha Abeyratne, chief scientist at ResApp Health, said in a statement.

“We can use this study data to retrain the algorithms to capture such differences and significantly boost the robustness of our algorithms as well as refine study procedures at the participating hospitals to deliver results which are more representative of the algorithms’ capabilities.”

This isn’t the first issue the study has had. In April, ResApp announced it had to expand the study to compensate for unseasonably low pneumonia rates in the study population.
