AMSR / conferences_raw / midl20 / MIDL.io_2020_Conference_D5lK-IW_xS.json
{"forum": "D5lK-IW_xS", "submission_url": "https://openreview.net/forum?id=Z4VtzvKT91", "submission_content": {"keywords": ["Convolutional Neural Networks", "Echocardiography", "Segmentation", "Data Augmentation"], "TL;DR": "Accumulating outputs from multiple models, or over augmentations of the test image on a single model, can improve poor semantic segmentation.", "track": "short paper", "authorids": ["[email protected]"], "title": "Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets", "authors": ["Joshua V. Stough"], "paper_type": "methodological development", "abstract": "The automatic segmentation of heart substructures in 2D echocardiography images is a goal common to both clinicians and researchers. Convolutional neural networks (CNNs) have recently shown the best average performance. However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs: model averaging and augmented inference. Model averaging involves training multiple instances of a CNN with data augmentation over a sampled training set. Augmented inference involves accumulating network output over augmentations of the test image. Using the recently released CAMUS echocardiography dataset, we show significant incremental improvement in outlier performance over the baseline model. These encouraging results must still be validated against independent clinical data.", "paperhash": "stough|model_averaging_and_augmented_inference_for_stable_echocardiography_segmentation_using_2d_convnets", "pdf": "/pdf/fb9cd852680022ccbe66308724231378583f4998.pdf", "_bibtex": "@misc{\nstough2020model,\ntitle={Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets},\nauthor={Joshua V. 
Stough},\nyear={2020},\nurl={https://openreview.net/forum?id=Z4VtzvKT91}\n}"}, "submission_cdate": 1579955781051, "submission_tcdate": 1579955781051, "submission_tmdate": 1587172199973, "submission_ddate": null, "review_id": ["l2MbNyEgSjP", "M1kkEjBPQTs", "yY-pqA1VlE2", "b_hjgIKX6U"], "review_url": ["https://openreview.net/forum?id=Z4VtzvKT91&noteId=l2MbNyEgSjP", "https://openreview.net/forum?id=Z4VtzvKT91&noteId=M1kkEjBPQTs", "https://openreview.net/forum?id=Z4VtzvKT91&noteId=yY-pqA1VlE2", "https://openreview.net/forum?id=Z4VtzvKT91&noteId=b_hjgIKX6U"], "review_cdate": [1584735494183, 1584662517976, 1584660701567, 1584192158471], "review_tcdate": [1584735494183, 1584662517976, 1584660701567, 1584192158471], "review_tmdate": [1585229442152, 1585229441589, 1585229441068, 1585229440568], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper302/AnonReviewer6"], ["MIDL.io/2020/Conference/Paper302/AnonReviewer5"], ["MIDL.io/2020/Conference/Paper302/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper302/AnonReviewer4"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["D5lK-IW_xS", "D5lK-IW_xS", "D5lK-IW_xS", "D5lK-IW_xS"], "review_content": [{"title": "Review of paper: Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets", "review": "- In the methods section, the authors claim to describe their model. However, all we have is the description of a U-Net. Was any modification made to the U-Net?\n - In the results section, it is not clear why the authors chose only one fold. Furthermore, it is unclear by how much the proposed method improved the results.\n - In the conclusion it is said '... augmented inference *may* dramatically improve...'; does that mean that it sometimes works and sometimes does not? 
Please be clearer.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Lacking improvements and results", "review": "Contributions:\n\nThe authors propose ensemble learning (1) to further improve the Dice score, as well as \"accumulating\" the predictions of a single model over test-time augmentation (2) to improve outlier performance. The data used came from the CAMUS dataset, and the model is a U-Net, the same architecture used in the original CAMUS paper.\n\nMethod:\nFor contribution 1, the authors split the patients into 10 folds, kept two as the testing set, and trained eight separate models on the remaining folds, each keeping a different fold as its validation set. A separate model was trained for each of the two views available per patient, for a total of 16 trained models.\n\nFor contribution 2, the authors \"accumulate\" the predictions of a single model trained over a single fold by augmenting a test image 200 times via a combination of intensity modification, rotation and Gaussian noise.\n\nResults:\nFor contribution 1, a box plot of the Dice distribution is reported over different structures, separated by view and phase. The results are shown for a single model against the proposed ensemble of models.\nFor contribution 2, the Dice score improvement for the accumulated result is reported for a single test image, as well as a qualitative assessment of the segmentation for the same image.\n\nCriticism:\nEnsemble learning is generally recognized as an easy way to improve results on virtually any task. However, it is not a cheap method and requires n times the amount of memory and training time. In itself, the reviewer feels it cannot be considered an improvement of a method. 
In this particular case, as figure 1 shows, the ensembling can hardly be justified as improvements shown via box plot seem to be marginal, at a cost of 8 times the amount of memory.\n\nContribution 2 seems to have significant improvements over the baseline, the authors' own U-Net trained on a single fold. However, test-time augmentation is another commonly used practice and the reviewer also feels it is not a novel idea in itself. Furthermore, it is unclear what \"accumulating\" means, whether it is taking the overlap of the 200 predictions of the noisy image, a threshold per pixel, or any other method. Finally, the reported results are vague and only from a single hand-picked outlier test image. Nothing can be confidently inferred from this result.\n\nWhile the two contributions are orthogonal, no results are reported on the application of the two contributions at the same time.\n\nFinally, only the dice score is reported, while the original CAMUS paper also reported Hausdorff distance and mean absolute distance.\n\nConclusion:\nThe paper does not present any novel idea for cardiac segmentation. Even though the presented article is a short paper, the article glosses over important details and fails to present meaningful results.", "rating": "1: Strong reject", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Model for improved performance on outliers on echocardiography segmentation using test-time augmentation and ensembling", "review": "- The paper is very clearly written and the methods clearly described. The method involves ensembling 8 U-net models, trained on different overlapping folds of the echocardiography data with on-the-fly augmentations, and then applying test time augmentation by introducing 200 rotation variations and averaging the (unrotated) predictions.\n- The data is split into 10 folds initially, where 2 are held out as test data. 
The 8 U-net models are trained on 7 of the 8 remaining folds in rotation (with the remaining 1/8 held out for validation on each of these splits). The ensembled prediction is compared to a baseline U-net trained only on a single fold. This, however, is not a fair comparison, as the ensemble ultimately sees all the data from the 8 folds across the 8 trained models, so the baseline effectively learns from 12.5% fewer real training images. Nonetheless, it is well established that ensembling improves over single models, as also demonstrated in the paper.\n- Test time augmentation improves segmentation results compared to the baseline model too. It is unclear, however, whether test time augmentation improves over the ensemble model without test time augmentation.\n- Both ensembling and test time augmentation are well established approaches in the literature. There is limited novelty in the proposed work, although clear improvements over a U-net baseline are shown.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting paper but rather incremental contribution with very brief experimental evaluation", "review": "The paper proposes to improve results of echocardiography image segmentation using model averaging and augmented inference. These ideas are not particularly novel, but have proven to be valuable in multiple recent studies. \nIn particular, the authors claim that averaging the predictions from multiple models improves performance and avoids the spectacular failures that a single model's predictions may sometimes exhibit. Additionally, data augmentation at test time also improves the results, making them more stable. \n\nThe authors have trained and evaluated their method on data from the CAMUS dataset. 
This dataset is pretty large and the data variability observed there is sufficient to evaluate the generalisation capabilities of the method proposed by the authors.\n\nUnfortunately, I find that the evaluation is not complete. First of all, the authors only compare a randomly picked model from their 8-fold cross-validation strategy with the average of the 8 folds. It would be interesting to see how a single model performs compared to an average of 2, 3, 4,..., 8 models. More importantly, it would be very good to see how the average of different architectures would work. \n\nAdditionally, the authors seem to state that test-time augmentation has only been done on one example, which is the one used for qualitative analysis and that is reported in the figure. It would have been really great to see a formal comparison of the performance with and without test-time augmentation for the whole test set. \n\nImportantly, the box plot visualisation of the results leaves too much to the imagination of the readers. It would have been much better to include a table with results. Through a table, it would have been possible to show results for more experiments, even though some visibility on outliers might be lost (compared to box plots).\n\nI have no doubt that the technique proposed in the paper is valuable. Given the length constraints of short papers I also understand the fact that the experimental evaluation is compact. I still think it could have been better: a table could have shown different angles on the advantages brought by the proposed technique.
", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1585341197731, "meta_review_tcdate": 1585341197731, "meta_review_tmdate": 1585341197731, "meta_review_ddate": null, "meta_review_title": "MetaReview of Paper302 by AreaChair1", "meta_review_metareview": "This is a well-written paper. But like the reviewers, I lean towards a weak reject as the improvements of the proposed method are quite humble to say the least (cf. Fig. 1). Furthermore, clear statistics on the reduction of the number of outliers are missing. This is too bad considering that this was the goal mentioned in the abstract:\n\n\"However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs\"", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper302/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Z4VtzvKT91&noteId=Wxgzp9eeDA"], "decision": "reject"}
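The record above only names the paper's two schemes, model averaging and augmented inference, without giving pseudocode. The sketch below illustrates both under stated assumptions: each `model` is a hypothetical callable mapping an HxW image to an HxWxC per-class probability map, the image is square, the augmentations are exact 90-degree rotations plus Gaussian noise (the paper reportedly used small rotations and intensity modification), and the "accumulation" rule, which reviewer 5 notes is unspecified, is taken here to be plain averaging of probabilities.

```python
import numpy as np

def ensemble_average(models, image):
    """Model averaging: mean per-class probability map over several trained models.

    `models` is a list of callables, each mapping an HxW image to an HxWxC
    probability map (an assumed interface; the source does not specify one).
    """
    return np.mean([m(image) for m in models], axis=0)

def augmented_inference(model, image, n_aug=200, noise_sigma=0.01, rng=None):
    """Augmented inference: accumulate predictions over augmented test copies.

    Each copy is a random 90-degree rotation of the image plus Gaussian noise
    (a simplification of the paper's small rotations / intensity changes);
    the prediction is rotated back before averaging, so all n_aug probability
    maps live in the original image frame.
    """
    rng = np.random.default_rng(rng)
    acc = np.zeros_like(model(image))
    for _ in range(n_aug):
        k = int(rng.integers(4))                       # random multiple of 90 degrees
        aug = np.rot90(image, k) + rng.normal(0.0, noise_sigma, image.shape)
        acc += np.rot90(model(aug), -k)                # undo rotation; np.rot90 acts
                                                       # on the first two (spatial) axes
    return acc / n_aug
```

Averaging probability maps, rather than hard labels, is what lets a single spectacular failure be outvoted by the other models or augmented copies, which is the stabilising effect the abstract claims.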