{"forum": "8gSjgXg5U", "submission_url": "https://openreview.net/forum?id=qGWHiEgYAs", "submission_content": {"keywords": ["joint learning", "liver lesions", "lesion classification", "lesion segmentation", "CT"], "TL;DR": "We compare different approaches to fine-tuning-based transfer learning to improve joint segmentation and classification of liver lesions in CT images.", "track": "short paper", "authorids": ["[email protected]", "[email protected]"], "title": "Joint Liver Lesion Segmentation and Classification via Transfer Learning", "authors": ["Michal Heker", "Hayit Greenspan"], "paper_type": "well-validated application", "abstract": "Transfer learning and joint learning approaches are extensively used to improve the performance of Convolutional Neural Networks (CNNs). In medical imaging applications, in which the target dataset is typically very small, transfer learning improves feature learning, while joint learning has shown effectiveness in improving the network's generalization and robustness. In this work, we study the combination of these two approaches for the problem of liver lesion segmentation and classification.\nFor this purpose, 332 abdominal CT slices with lesion segmentation and classification of three lesion types are evaluated. 
For feature learning, the dataset of MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge is used.\nJoint learning shows improvement in both segmentation and classification results.\nWe show that a simple joint framework outperforms the commonly used multi-task architecture (Y-Net), achieving an improvement of 10% in classification accuracy, compared to 3% improvement with Y-Net.", "paperhash": "heker|joint_liver_lesion_segmentation_and_classification_via_transfer_learning", "pdf": "/pdf/52921a88ea63bf67d90afb9a52cd7ab6d4206a06.pdf", "_bibtex": "@inproceedings{\nheker2020joint,\ntitle={Joint Liver Lesion Segmentation and Classification via Transfer Learning},\nauthor={Michal Heker and Hayit Greenspan},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=qGWHiEgYAs}\n}"}, "submission_cdate": 1579955791787, "submission_tcdate": 1579955791787, "submission_tmdate": 1588098542366, "submission_ddate": null, "review_id": ["7_XKiTKaKZ", "_RSmtgNi8e", "JBrSzRyHul", "vLCITCEc32"], "review_url": ["https://openreview.net/forum?id=qGWHiEgYAs&noteId=7_XKiTKaKZ", "https://openreview.net/forum?id=qGWHiEgYAs&noteId=_RSmtgNi8e", "https://openreview.net/forum?id=qGWHiEgYAs&noteId=JBrSzRyHul", "https://openreview.net/forum?id=qGWHiEgYAs&noteId=vLCITCEc32"], "review_cdate": [1584136686883, 1583878718717, 1583877823319, 1583138892934], "review_tcdate": [1584136686883, 1583878718717, 1583877823319, 1583138892934], "review_tmdate": [1585229591396, 1585229590870, 1585229590374, 1585229589826], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper322/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper322/AnonReviewer2"], ["MIDL.io/2020/Conference/Paper322/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper322/AnonReviewer4"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["8gSjgXg5U", "8gSjgXg5U", 
"8gSjgXg5U", "8gSjgXg5U"], "review_content": [{"title": "Good short paper, requires some clarifications", "review": "I found the paper easy to read and containing interesting results about how some baseline models outperform more sophisticated ones. I am recommending acceptance, but I have some remaining doubts that I would like to be answered, if possible. Namely:\n1) I do not see very clearly from the text what the authors mean by joint learning. I believe Figure 1 could serve the purpose of actually clarifying what is happening in each scenario; unfortunately, it has a very poor caption. Could the authors add a short description of each of the four schemes in that caption, and label them as a), b), c), and d)? I believe that would help a lot.\n2) I don't understand this sentence \"the lesion class is under-represented\". I guess it is because of the use of the word \"class\", which makes the reader think about classification. Do you actually mean \"slices containing liver lesions were under-represented as opposed to lesion-free slices\"? Because later in the text, the number of examples for each lesion class is mentioned, and they seem pretty balanced. Anyway, if what I am saying is the case, are you using weighted cross-entropy, or are you oversampling slices with lesions during training?", "rating": "3: Weak accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Good application of previous techniques", "review": "In this paper, the authors combine the advantages of joint learning and transfer learning to improve the performance on\nliver lesion segmentation and classification. 
Although the techniques are not new, it is good to use them in new applications.\n\nI am just curious about the performance in the following settings:\n(1) The segmentation performance on the authors' private data using the pretrained model from the LiTS dataset.\n(2) The segmentation performance on the authors' private data if only fine-tuning a segmentation model instead of the joint model.\n\nSetting (1) can show us how much the fine-tuning can improve performance over the good pretrained model from the LiTS dataset.\nSetting (2) can show us whether the joint learning has a bad effect on the segmentation task.", "rating": "3: Weak accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "no novel approach; missing paper focus on application", "review": "(+) An exhaustive validation of different experimental setups for liver lesion segmentation and classification on a self-collected database is provided.\n\n(+) Figure 1 nicely summarizes the tasks and experimental setups.\n\n(+/-) The paper is mostly well written. However, I would recommend restructuring Section 2. Your paper is of type well-validated application. Thus, first describing the given segmentation and classification tasks and the collected data, and afterwards the experimental setups, seems more reasonable to me. In general, the paper focuses too much on the methodology of transfer and joint learning, in my opinion.\n\n(-) Your motivation is quite weak. Why is such an automatic classification approach required? Please specify in abstract/introduction.\n\n(-) No comparison to existing approaches on liver lesion segmentation (e.g. winners of the LiTS Challenge) is performed.\n\n(-) Transfer, joint and multi-task learning are well-known approaches to deal with limited data. No methodical tricks are presented. 
\n\n(-) \"The first framework incorporates a multi-task U-Net, [...]\" This is confusing, as Figure 1 shows the multi-task approach as the third framework.", "rating": "2: Weak reject", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Some details missing, but looks like good work", "review": "This work is based on a private dataset of 332 CT slices (not volumes!) from 140 patients with three different types of annotated liver lesions (cysts, hemangiomas, mets). They compare two multi-task approaches for segmentation and classification and two baseline (ablation, single-task) approaches on this dataset. Additionally, the encoders are either randomly initialised or pretrained on ImageNet (out of domain) or the LiTS dataset (same domain).\n\nThe number of 2D slices being used is relatively small, which limits the contribution to some degree, but the setup is solid and the results are definitely interesting.\n\nI missed some details on the architectures (numbers of filters, for instance) and possible image preprocessing. 
I also wondered if the number of resolution levels is really only 3, which would limit the receptive field (without knowing any details about the employed blocks, it is hard to guess, but it could be around 44\u00b2 pixels theoretical maximum, the ERF being even smaller).\n\nIt would also have been interesting to do an ablation study on the SE (squeeze & excitation) blocks, but at least they were used in all four compared approaches, so the comparison is fair.\n\nOverall, I would rate it between 3 and 4, but I do think it is a nice contribution to MIDL, so I voted 4 (\"strong\" accept).", "rating": "4: Strong accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1586297765295, "meta_review_tcdate": 1586297765295, "meta_review_tmdate": 1586297765295, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper322 by AreaChair1", "meta_review_metareview": "The following quotes from the reviews demonstrate important critical points sufficient to justify rejection.\nNo rebuttal was provided to address any of them:\n\n- \"transfer, joint and multi-task learning are well known approaches to deal with limited data\",... \"the techniques are not new\", \"application of previous techniques\", \"no novel approach\"\n\n- \"motivation is quite weak\"\n\n- \"no comparison to existing approaches on liver lesion segmentation\"\n\n\nIn summary, rejection is justified by lack of technical novelty, weak motivation, and lack of comparison. 
Most reviewers also pointed out some issues related to clarity and lack of details and focus.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper322/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=qGWHiEgYAs&noteId=txPS15A1Sul"], "decision": "reject"}